
A Comprehensive Overview of the Retired E20-690 Exam

The E20-690 Exam, formally known as the VNX Solutions Specialist Exam for Implementation Engineers, represented a significant milestone for professionals in the data storage industry. This certification was designed to validate the skills and knowledge required to implement and manage EMC VNX series storage systems. Although this exam has since been retired and replaced by newer certifications focusing on modern Dell storage platforms, studying its objectives provides valuable insight into the foundational principles of unified storage architecture. It tested a candidate's ability to handle the physical installation, software configuration, storage provisioning, and initial integration of VNX systems in various network environments.

Achieving success on the E20-690 Exam required a deep understanding of both block and file storage concepts, as the VNX platform was a unified system. Candidates needed to be proficient with hardware components like Disk Processor Enclosures (DPE) and Storage Processors (SP), as well as the Unisphere management software. The exam curriculum covered a wide array of topics, including RAID configurations, storage pools, LUN creation and masking, and the implementation of advanced features such as Fully Automated Storage Tiering (FAST). A certified professional was expected to be able to bring a VNX system from its shipping crate to a fully operational state within a customer's data center.

The networking component of the E20-690 Exam was equally critical. Implementation engineers had to demonstrate competence in configuring host access for both SAN and NAS environments. This included setting up Fibre Channel (FC) and iSCSI for block access, as well as configuring CIFS and NFS protocols for file sharing. The exam stressed the importance of best practices for cabling, zoning on a Fibre Channel switch, and network configuration for optimal performance and high availability. Understanding how to integrate the VNX with operating systems like Windows, Linux, and VMware ESXi was another core competency measured by this certification.

While the specific E20-690 Exam is no longer available, the knowledge it encompassed remains relevant. The principles of storage provisioning, data protection, performance management, and unified protocols are fundamental to modern storage solutions like the Dell Unity XT series. Professionals who once studied for or held this certification built a strong foundation that has been transferable to subsequent technologies. For those studying storage systems today, reviewing the topics of the E20-690 Exam can serve as a detailed historical lesson in the evolution of enterprise data storage and the critical role of the implementation engineer.

Historical Context of the E20-690 Exam

The E20-690 Exam was a cornerstone certification for data storage professionals specializing in EMC technologies. It was specifically tailored for implementation engineers, the technical experts responsible for the physical installation, configuration, and initial deployment of storage solutions at customer sites. Passing this exam signified that an individual possessed the requisite skills to successfully deploy EMC VNX series storage arrays. This certification was part of the broader EMC Proven Professional program, a well-respected framework that provided a clear path for career development in the storage industry. The exam validated not just theoretical knowledge but also the practical abilities needed for real-world scenarios.

The curriculum for the E20-690 Exam was meticulously designed to cover the entire lifecycle of a basic VNX implementation. It ensured that certified individuals understood the product's architecture, features, and management tools. This was crucial for maintaining a high standard of service and ensuring customer satisfaction. The exam's focus was on the VNX1 and VNX2 series, which were leading unified storage platforms of their time. These systems combined block and file storage capabilities into a single array, offering flexibility and efficiency. The certification therefore required a blended skill set, covering both SAN and NAS technologies, which was a key differentiator for professionals.  

As the technology landscape evolved, so did the certification tracks. The E20-690 Exam was eventually retired to make way for new exams covering more modern storage platforms, such as the Dell Unity and PowerStore families. While the specific product knowledge has been superseded, the fundamental concepts tested in the E20-690 Exam remain highly relevant. Principles of storage networking, RAID protection, LUN provisioning, and performance tiering are still central to today's storage systems. Thus, understanding the structure of this legacy exam provides valuable insight into the core competencies that have always defined a skilled storage implementation engineer.

The retirement of the E20-690 Exam marks a point in the technological timeline, reflecting the industry's shift towards more software-defined, hyper-converged, and cloud-integrated solutions. However, for those who prepared for and passed it, the exam represented a deep dive into a powerful and popular storage platform. It equipped them with a disciplined approach to deployment, troubleshooting, and management that continues to be beneficial in their careers. Reviewing its objectives helps new engineers appreciate the complexity and precision required to build and maintain robust enterprise storage infrastructure, a skill that is timeless in the world of IT.

Understanding the VNX Unified Architecture

A central topic of the E20-690 Exam was the unified nature of the VNX platform. The term "unified" signifies the system's ability to serve both block-level data and file-level data from a single storage array. Block storage is typically associated with Storage Area Networks (SAN) and protocols like Fibre Channel (FC) and iSCSI, serving data to applications like databases. File storage is associated with Network Attached Storage (NAS) and protocols like NFS and CIFS, typically used for file sharing. The VNX integrated these two distinct worlds into one cohesive solution, which was a significant advantage.  

The VNX architecture achieved this unification through a clever hardware design. The system was built around a Storage Processor Enclosure (SPE) or a Disk Processor Enclosure (DPE), which contained the storage processors (SPs) responsible for managing the block storage operations. These SPs, running the FLARE operating system, handled tasks like RAID calculations, cache management, and LUN presentation. To provide file services, the system incorporated X-Blades, also known as Data Movers, which were dedicated servers running the DART operating system. These Data Movers connected to the back-end block storage managed by the SPs and presented it to the network as file shares.  

This separation of duties between the block-handling Storage Processors and the file-serving Data Movers was a key concept for anyone taking the E20-690 Exam. The SPs provided the underlying storage volumes, or LUNs, to the Data Movers. The Data Movers would then use these LUNs to create their own file systems, which were then shared out to clients using NAS protocols. This modular design allowed for both high performance and scalability. An implementation engineer needed to understand how to configure the interplay between these components, ensuring that both SAN and NAS clients could access data efficiently and reliably.
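The layered relationship described above can be sketched in a few lines of Python. This is purely an illustrative model, not any EMC API: the class and attribute names (`StorageProcessor`, `DataMover`, `bind_lun`, and so on) are invented for the example, but the flow matches the text: the SPs carve out block LUNs, and a Data Mover builds its file systems on top of them.

```python
# Illustrative model of the VNX unified layering: Storage Processors
# own block LUNs; Data Movers consume those LUNs as raw storage for
# file systems. All names here are hypothetical, not an EMC API.

class StorageProcessor:
    def __init__(self, name):
        self.name = name
        self.luns = {}          # lun_id -> size in GB

    def bind_lun(self, lun_id, size_gb):
        # Carve a block volume out of back-end storage.
        self.luns[lun_id] = size_gb
        return lun_id

class DataMover:
    def __init__(self, name):
        self.name = name
        self.file_systems = {}  # fs name -> list of backing LUN ids

    def create_file_system(self, fs_name, lun_ids):
        # The Data Mover layers a file system over block LUNs
        # provisioned by the Storage Processors.
        self.file_systems[fs_name] = list(lun_ids)
        return fs_name

spa = StorageProcessor("SPA")
dm2 = DataMover("server_2")
lun = spa.bind_lun(0, size_gb=500)
fs = dm2.create_file_system("fs_marketing", [lun])
print(fs, dm2.file_systems[fs])   # fs_marketing [0]
```

The point of the sketch is the dependency direction: the file layer never touches physical disks directly, only LUNs the block layer has already provisioned.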

The management of this unified system was handled through a single interface called Unisphere. This software provided a consolidated view of both the block and file components of the VNX array. From Unisphere, an administrator could provision LUNs for a database server and, in the same session, create a new CIFS share for a department. This simplified management was a major selling point for the VNX. Mastery of the Unisphere interface, including its wizards and configuration options for both block and file resources, was absolutely essential for passing the E20-690 Exam and for effectively managing the system in a production environment.  

Core Hardware Components of the VNX Series

To succeed in the E20-690 Exam, a deep and practical knowledge of the VNX hardware was mandatory. The primary building block of a VNX system was either a Disk Processor Enclosure (DPE) or a Storage Processor Enclosure (SPE). A DPE was common in smaller models and contained the Storage Processors (SPs) as well as the first set of disk drives. An SPE, found in larger models, contained only the SPs, and all disk drives were housed in separate enclosures. Understanding this distinction was crucial for properly identifying and cabling a new system during installation.  

The brains of the operation were the Storage Processors, or SPs. Every VNX system had two SPs, typically named SPA and SPB, operating in an active-active or active-passive configuration depending on the LUN ownership. Each SP was an independent server with its own CPU, memory, and I/O ports. They were responsible for all block-level data services, including read/write requests, cache management, and RAID logic. A key concept tested in the E20-690 Exam was the role of the write cache and how it was protected against power failure using standby power supplies (SPS) or battery backup units (BBU).

Expansion of storage capacity was achieved by adding Disk Array Enclosures (DAEs). These were chassis filled with disk drives that connected to the back-end SAS ports of the DPE or SPE. The E20-690 Exam required candidates to know the different types of DAEs, the drive types they supported (such as SAS, NL-SAS, and Flash), and the proper cabling procedures for creating redundant backend chains, often referred to as SAS buses. Incorrect cabling could lead to performance issues or a loss of redundancy, making this a critical skill for an implementation engineer.  

For file services, the system used Data Movers, which were essentially blade servers that slotted into the chassis. Each Data Mover had its own CPU, memory, and front-end network ports for client connectivity. They were responsible for handling all NAS protocols like CIFS and NFS. The exam tested the physical installation of these components as well as their logical configuration. Additionally, candidates had to be familiar with various I/O modules, known as UltraFlex I/O modules, which provided flexible connectivity options for Fibre Channel, iSCSI, and Ethernet, allowing the VNX to be adapted to diverse customer environments.

Introduction to VNX Software and Management

While hardware knowledge was fundamental, proficiency in the VNX software environment was equally critical for the E20-690 Exam. The core operating system for the block side of the array was called FLARE (Fibre Logic Array Runtime Environment). This highly specialized OS ran on the Storage Processors and was responsible for all the low-level functions of the storage system, such as managing disk drives, executing RAID algorithms, and handling I/O operations. An implementation engineer needed to understand the process of initializing the system and loading this operating environment, as well as performing non-disruptive upgrades (NDUs) of the FLARE code.  

On the file side, the Data Movers ran a separate operating system called DART (Data Access in Real Time). DART was a hardened OS optimized for high-performance file serving. It managed the file systems, network connections, and protocol services like NFS and CIFS. For the E20-690 Exam, it was important to understand the relationship between FLARE and DART. The DART OS relied on the underlying block storage provided by FLARE to create its file systems. This interaction was a core part of the unified storage concept, and configuring it correctly was a key task for any deployment.

The primary tool used to manage both FLARE and DART was Unisphere. This web-based graphical user interface (GUI) provided a single pane of glass for all administrative tasks. An engineer preparing for the E20-690 Exam would need to spend significant time practicing within the Unisphere environment. This included navigating its various dashboards, wizards, and configuration dialogs. Key tasks performed in Unisphere included creating RAID groups and storage pools, provisioning LUNs, setting up host access, configuring file systems and shares, and monitoring the overall health and performance of the array.

Beyond the GUI, the VNX also offered a powerful command-line interface (CLI) known as NavisecCLI for block management and a separate CLI for file management accessible via SSH to the Data Movers. While Unisphere was used for most day-to-day tasks, the CLI was essential for scripting, automation, and certain advanced troubleshooting procedures. The E20-690 Exam expected candidates to be familiar with the basic syntax and common commands of NavisecCLI, as it demonstrated a deeper level of expertise and was often necessary for more complex implementation or diagnostic scenarios.
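To give a feel for the CLI syntax without a live array, the snippet below only assembles NaviSecCLI command lines rather than executing them. The SP address and credentials are placeholders; `getagent` and `storagepool -list` are common NaviSecCLI subcommands, and the `-h`, `-User`, `-Password`, and `-Scope` options supply the target SP and login details.

```python
# Hedged sketch: build (but do not run) NaviSecCLI command lines.
# Addresses and credentials below are placeholders for illustration.

def naviseccli(sp_address, *args, user="sysadmin", password="sysadmin", scope=0):
    # NaviSecCLI targets one Storage Processor per invocation (-h),
    # with -User/-Password/-Scope supplying credentials.
    cmd = ["naviseccli", "-h", sp_address,
           "-User", user, "-Password", password, "-Scope", str(scope)]
    cmd.extend(args)
    return cmd

# Query basic agent/system information from SPA:
print(" ".join(naviseccli("10.0.0.10", "getagent")))

# List configured storage pools:
print(" ".join(naviseccli("10.0.0.10", "storagepool", "-list")))
```

In practice such a wrapper would hand the list to `subprocess.run`; keeping it as a pure string builder makes the syntax easy to inspect and script against.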

Storage Provisioning Concepts for the E20-690 Exam

A significant portion of the E20-690 Exam was dedicated to the principles and practices of storage provisioning. This is the process of allocating storage capacity from the array and presenting it to host servers. The foundational concept in VNX was the RAID group. A RAID (Redundant Array of Independent Disks) group is a collection of physical disks combined into a single logical unit to provide data protection and performance benefits. Candidates were required to know the different RAID levels supported by VNX, such as RAID 1/0, RAID 5, and RAID 6, and understand the trade-offs of each in terms of performance, capacity, and protection.

Building upon RAID groups, the next logical step was the creation of LUNs (Logical Unit Numbers). A LUN is a numbered logical volume carved out of a RAID group that can be presented to a host. The host operating system sees this LUN as a standard block device, like a local hard drive. A key skill for the E20-690 Exam was understanding how to properly size and configure LUNs based on application requirements. This included considerations for performance, such as placing high-I/O applications on LUNs built on high-performance drives, and for capacity management.  

The VNX also introduced a more modern and flexible approach to provisioning called storage pools. Instead of creating many individual RAID groups, an administrator could create a large pool of storage by combining multiple RAID groups. LUNs could then be provisioned from this pool. Storage pools offered significant advantages, including simplified management and the ability to use features like thin provisioning and automated tiering. Thin provisioning allows a LUN to present more capacity to a host than is physically allocated, with space being consumed on demand. This concept was a frequent topic in E20-690 Exam materials.  
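The thin-provisioning accounting described above can be made concrete with a toy model: each thin LUN advertises its full size to the host, while physical pool space is consumed only as data is written. All numbers and names are invented for the example.

```python
# Toy model of thin provisioning: advertised capacity can exceed
# physical capacity (oversubscription); space is consumed on write.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.consumed_gb = 0
        self.luns = {}           # name -> {"advertised": GB, "written": GB}

    def create_thin_lun(self, name, advertised_gb):
        # The host sees the advertised size immediately; no space is used yet.
        self.luns[name] = {"advertised": advertised_gb, "written": 0}

    def write(self, name, gb):
        # Physical space is only consumed as data actually lands.
        if self.consumed_gb + gb > self.physical_gb:
            raise RuntimeError("pool out of physical space")
        self.luns[name]["written"] += gb
        self.consumed_gb += gb

    def subscription_ratio(self):
        advertised = sum(l["advertised"] for l in self.luns.values())
        return advertised / self.physical_gb

pool = ThinPool(physical_gb=1000)
pool.create_thin_lun("lun_db", 800)
pool.create_thin_lun("lun_files", 800)   # pool is now oversubscribed 1.6x
pool.write("lun_db", 200)
print(pool.subscription_ratio(), pool.consumed_gb)  # 1.6 200
```

The subscription ratio is exactly the figure an administrator watches on an oversubscribed pool: the hosts believe 1600 GB exists, but only 200 GB of the 1000 GB of physical space is consumed so far.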

The most advanced provisioning-related feature covered was FAST VP (Fully Automated Storage Tiering for Virtual Pools). With storage pools containing different tiers of disks (e.g., Flash, SAS, and NL-SAS), FAST VP would automatically move the most active data ("hot" data) to the fastest tier and the least active data ("cold" data) to the slowest, most cost-effective tier. Understanding how to configure FAST VP policies and monitor its operation was a hallmark of an expert VNX implementer. The E20-690 Exam tested a candidate's ability to explain and implement these advanced storage provisioning techniques to meet customer demands for both performance and cost efficiency.  
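The core of the FAST VP idea can be shown as a greedy placement: rank data slices by recent activity and pack the hottest into the fastest tier first. Real FAST VP relocates fixed-size slices on a schedule with configurable policies; this sketch only demonstrates the ranking-and-placement logic, with invented slice names and tier capacities.

```python
# Sketch of automated tiering: hottest slices land on the fastest tier.
# Tier capacities and activity counts are illustrative.

def place_slices(slices, tiers):
    """slices: {slice_id: io_count}.
    tiers: list of (tier_name, capacity_in_slices), fastest first.
    Returns {slice_id: tier_name}."""
    placement = {}
    ranked = sorted(slices, key=slices.get, reverse=True)  # hottest first
    for tier_name, capacity in tiers:
        for slice_id in ranked:
            if slice_id in placement:
                continue          # already placed on a faster tier
            if capacity == 0:
                break             # this tier is full; spill to the next
            placement[slice_id] = tier_name
            capacity -= 1
    return placement

activity = {"s1": 900, "s2": 50, "s3": 700, "s4": 5}
tiers = [("flash", 1), ("sas", 2), ("nl_sas", 10)]
print(place_slices(activity, tiers))
# {'s1': 'flash', 's3': 'sas', 's2': 'sas', 's4': 'nl_sas'}
```

The busiest slice wins the single Flash slot, the next-warmest data fills SAS, and the cold remainder settles on NL-SAS, which is the behaviour the exam expected candidates to be able to explain.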

The Role of the Storage Processor

The Storage Processor, or SP, is the central processing unit of the VNX array for all block-level operations. Understanding its function in minute detail was a prerequisite for success on the E20-690 Exam. Each VNX system contains two SPs, designated SPA and SPB, to provide high availability. If one SP fails, the other can take over its workload, ensuring continuous data access for connected hosts. This failover process is a critical concept, and engineers were expected to know how it worked, including the role of the CMI (CLARiiON Messaging Interface) bus that allows the two SPs to communicate and monitor each other's health.

Each SP is a self-contained server, equipped with powerful multi-core processors, a significant amount of memory used for caching, and multiple I/O ports for both host (front-end) and disk (back-end) connectivity. The memory within an SP is partitioned into different caches, most notably the read cache and the write cache. The read cache stores frequently accessed data to serve it back to hosts quickly, while the write cache accepts incoming writes from hosts at memory speed before destaging them to the slower disk drives. The management and protection of the write cache were critical topics for the E20-690 Exam.  

Write cache protection is paramount because any data in the cache that has not yet been written to disk is volatile. In the event of a power outage, this data would be lost. To prevent this, the VNX uses either Standby Power Supplies (SPS) or Battery Backup Units (BBU). Upon power loss, these units provide enough power for the SP to flush the contents of its write cache to a dedicated vaulting area on a set of internal disks. When power is restored, the SP can read the vaulted data and resume normal operations without any data loss. The E20-690 Exam required engineers to know how to test and verify the health of these backup power systems.
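The vaulting sequence just described reduces to a small state machine: acknowledged-but-undestaged writes are dirty cache pages; on power loss they are dumped to the vault, and on the next boot they are replayed and destaged. The sketch below is purely illustrative, with invented names and no relation to the real firmware.

```python
# Toy state machine for write-cache vaulting: host writes are held
# dirty in cache, vaulted on power loss, and replayed on restore.

class WriteCache:
    def __init__(self):
        self.dirty = {}    # lba -> data acknowledged but not yet on disk
        self.disk = {}     # destaged data
        self.vault = {}    # cache contents preserved across power loss

    def host_write(self, lba, data):
        # Acknowledged at memory speed; destaged to disk later.
        self.dirty[lba] = data

    def destage(self):
        self.disk.update(self.dirty)
        self.dirty.clear()

    def power_fail(self):
        # SPS/BBU keeps the SP alive just long enough to dump
        # dirty pages to the vault area.
        self.vault = dict(self.dirty)
        self.dirty.clear()

    def power_restore(self):
        # Replay vaulted writes, then resume normal destaging.
        self.dirty.update(self.vault)
        self.vault.clear()
        self.destage()

cache = WriteCache()
cache.host_write(100, "journal-entry")
cache.power_fail()
cache.power_restore()
print(cache.disk)   # {100: 'journal-entry'}
```

The invariant the real hardware enforces is the same one the model shows: an acknowledged write survives the power cycle and eventually lands on disk.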

Beyond caching and failover, the SPs are responsible for executing all the complex logic that makes a storage array function. This includes managing RAID parity calculations, processing LUN ownership, handling I/O requests from hosts, and managing background processes like disk scrubbing and LUN migration. The performance of the entire block storage system is directly tied to the processing power and efficiency of the SPs. A certified implementation engineer needed to understand how to monitor SP utilization and identify potential performance bottlenecks as part of their deployment and validation tasks.

Disk Array Enclosures and Drive Technologies

The capacity of a VNX system is determined by the number and type of disks it contains. These disks are housed in Disk Array Enclosures, or DAEs. A key task for an implementation engineer, and a focus of the E20-690 Exam, was the physical installation and cabling of these enclosures. DAEs connect to the back end of the Storage Processors via SAS (Serial Attached SCSI) buses. Proper cabling is essential for creating redundant paths, ensuring that the loss of a single cable or SAS port does not result in a loss of access to an entire shelf of disks.

DAEs came in various form factors, such as 2.5-inch (small form factor) and 3.5-inch (large form factor), and could hold different numbers of drives, typically 15 or 25. The E20-690 Exam curriculum required candidates to be familiar with these different DAE models and their specifications. This included understanding the numbering scheme for the disk slots and the location of the Link Control Cards (LCCs), which are the I/O modules on the back of the DAE that facilitate the SAS connections. An engineer on site would need to physically rack, cable, and power on these enclosures in a specific sequence.

The VNX platform supported several different types of disk drives, which could be mixed within the same array to create storage tiers. The fastest and most expensive tier was composed of Flash drives, also known as Solid State Drives (SSDs). These were used for applications requiring the highest performance. The next tier consisted of Serial Attached SCSI (SAS) drives, which were high-performance spinning disks, typically running at 10,000 or 15,000 RPM. The final tier was made up of Near-Line SAS (NL-SAS) drives, which were high-capacity spinning disks running at 7,200 RPM, ideal for less active data or backups.

The ability to correctly identify and install these different drive types was a practical skill tested by the E20-690 Exam. More importantly, engineers needed to understand the performance and capacity characteristics of each drive type to advise on the proper design of RAID groups and storage pools. For example, placing a high-transaction database on NL-SAS drives would result in poor performance, while using Flash drives for archival data would be an unnecessary expense. The core of the implementation role was matching the right storage technology to the specific business application.

Physical Installation and Initial Setup Procedures

The E20-690 Exam was designed for implementation engineers, so it placed a heavy emphasis on the initial physical setup of a VNX system. This process begins the moment the equipment arrives at the customer's data center. An engineer's responsibilities included unboxing the components, inspecting them for any shipping damage, and verifying the contents against the packing list. Once confirmed, the engineer would proceed with racking the hardware, which involves securely mounting the DPE/SPE and any DAEs into a standard equipment rack, ensuring proper alignment, spacing for airflow, and weight distribution.

After racking, the next critical phase was cabling. This involved several distinct steps. First, the backend SAS cabling between the Storage Processors and the DAEs had to be completed according to strict diagrams to ensure full redundancy. A common configuration involved creating multiple SAS chains, with each chain connected to both SPA and SPB. Second, network cables for management had to be connected. Each SP has a dedicated management port that needed to be plugged into the customer's management network. Finally, front-end host connectivity cables, such as Fibre Channel or Ethernet for iSCSI, had to be connected to the appropriate I/O modules.

Powering on the system followed a specific sequence, a procedure that was a likely topic for questions on the E20-690 Exam. Typically, all the DAEs were powered on first, allowing the disks to spin up and initialize. After the DAEs were ready, the DPE or SPE containing the Storage Processors was powered on. This controlled startup sequence ensures that the SPs can discover all the connected disk enclosures correctly upon boot-up. The engineer would then monitor the system's boot process via a serial console connection to ensure that both SPs came online without errors.

The final step in the initial setup was establishing a network connection to the SPs and performing the initial system configuration. This was typically done using a specialized initialization utility. The engineer would use this tool to assign IP addresses to the SPs' management ports, set the system's time and date, and configure DNS and NTP settings. Once this initial network configuration was complete, the rest of the system setup could be performed using the Unisphere management software from a web browser, marking the transition from physical installation to logical configuration. This entire process required precision and adherence to best practices.

Understanding UltraFlex I/O Modules

A key feature of the VNX series, and a relevant topic for the E20-690 Exam, was its use of UltraFlex I/O modules. These modules provided a high degree of flexibility and future-proofing for host connectivity. Instead of having fixed port types on the Storage Processors, the VNX used these modular, customer-replaceable components that could be easily swapped to change the type, speed, and number of front-end ports. This allowed a single VNX model to be adapted to a wide variety of customer environments, whether they used Fibre Channel, iSCSI, or FCoE.

There were several types of UltraFlex I/O modules available. For Fibre Channel environments, there were modules that provided multiple 8 Gb/s or 16 Gb/s ports. For Ethernet-based networks, there were modules with 1 Gb/s or 10 Gb/s ports, which could be used for iSCSI or NAS connectivity. There was also a Fibre Channel over Ethernet (FCoE) module, which allowed block storage traffic to run over a converged Ethernet network. An implementation engineer preparing for the E20-690 Exam needed to be able to identify these different modules and understand their specific use cases and configuration requirements.

The physical installation of these modules was straightforward, but the logical configuration required careful attention. For example, when using an optical Fibre Channel module, the engineer had to ensure that the correct type of SFP (Small Form-factor Pluggable) transceiver was used to match the customer's network infrastructure. When configuring a 10 Gb/s Ethernet module for iSCSI, the engineer would need to configure IP addresses, subnet masks, and VLAN tagging as required. The exam would test not just the "what" but the "how" of configuring these connections for reliable host communication.

The flexibility offered by UltraFlex I/O modules also played a role in system maintenance and upgrades. If a customer needed to upgrade their SAN from 8 Gb/s to 16 Gb/s Fibre Channel, it could be done by simply swapping out the I/O modules without replacing the entire storage array. This non-disruptive upgrade capability was a significant benefit. For the E20-690 Exam, understanding the implications of adding or changing modules, including any potential performance considerations or necessary software configuration changes, was essential for demonstrating a comprehensive grasp of the VNX platform's capabilities.

The Role of the Data Mover

In the unified VNX architecture, the Data Mover is the component responsible for all file-level services. While the Storage Processors handle block I/O, the Data Movers manage the NAS side of the house. A Data Mover, physically an X-Blade server, runs its own dedicated DART operating system and has its own network interfaces for client connections. The E20-690 Exam required a thorough understanding of the Data Mover's role and its interaction with the back-end block storage. The Data Movers are essentially specialized file servers integrated directly into the storage array chassis.

A VNX system could have one or more Data Movers, and they could be configured for high availability. In a typical redundant setup, two Data Movers would be configured in a primary/standby arrangement. If the primary Data Mover failed, the standby Data Mover would automatically take over its identity, including its IP addresses and file systems, ensuring that file services remained available to users and applications. This failover process was a critical concept for the E20-690 Exam, as ensuring business continuity is a primary goal of any enterprise storage implementation.

The Data Movers do not have their own physical disks. Instead, they use storage provisioned from the back-end block storage pool managed by the Storage Processors. An administrator would create LUNs on the block side and assign them to the Data Movers. The Data Movers would then use these LUNs as their raw storage, on top of which they would create and manage their own file systems using the AVM (Automatic Volume Management) feature. This layered approach was fundamental to the unified design, and an implementation engineer needed to be proficient in provisioning storage for the file side of the system.

Once a file system was created, the Data Mover could then share it out to the network using standard NAS protocols. This included creating CIFS (Common Internet File System) shares for Windows clients and NFS (Network File System) exports for Linux/UNIX clients. The E20-690 Exam would cover the steps involved in configuring these protocols, including setting up user authentication through services like Active Directory or LDAP, and managing share-level permissions. A successful implementation required the engineer to integrate the VNX's file services seamlessly into the customer's existing network and security infrastructure.
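The dual-protocol sharing described above can be summarised as two small export definitions over the same file system. This is an illustrative data model only; the paths, share names, domain group, and client subnet are all invented for the example.

```python
# Illustrative model: one Data Mover file system exposed to Windows
# clients via CIFS and to Linux/UNIX clients via NFS. All names and
# addresses are hypothetical.

def cifs_share(share_name, fs_path, allowed_groups):
    # CIFS access is typically controlled per-group via Active Directory.
    return {"protocol": "CIFS", "name": share_name,
            "path": fs_path, "access": list(allowed_groups)}

def nfs_export(fs_path, rw_hosts, ro_hosts=()):
    # NFS access is typically controlled per-host or per-subnet.
    return {"protocol": "NFS", "path": fs_path,
            "rw": list(rw_hosts), "ro": list(ro_hosts)}

exports = [
    cifs_share("engineering", "/fs_eng", ["DOMAIN\\eng-users"]),
    nfs_export("/fs_eng", rw_hosts=["10.1.1.0/24"]),
]
# The same file system is now reachable by Windows and Linux clients.
print([e["protocol"] for e in exports])   # ['CIFS', 'NFS']
```

Note the asymmetry the model captures: CIFS permissioning leans on directory-service groups, while NFS access control is host- and subnet-based, and a unified implementation has to keep the two coherent.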

Navigating the Unisphere Management Interface

Mastery of the Unisphere management software was arguably the most critical skill for any professional attempting the E20-690 Exam. Unisphere provided a unified, web-based graphical interface for managing all aspects of the VNX array, from block storage provisioning to file system creation and host connectivity. Its design philosophy was to simplify complex tasks through the use of dashboards, wizards, and a logical, object-oriented navigation structure. An implementation engineer would spend the majority of their configuration time working within this interface, making fluency with it essential.  

The Unisphere dashboard offered a high-level, at-a-glance view of the system's health, capacity utilization, and performance. From this central point, an administrator could quickly identify any alerts or potential issues. The E20-690 Exam expected candidates to be able to interpret the information presented on the dashboard and know where to drill down for more details. For example, seeing a capacity alert on the dashboard should prompt the engineer to navigate to the storage pools section to investigate which pool is running low on space. This ability to move from a high-level overview to a specific configuration detail was a key competency.  

Unisphere was organized into logical domains: System, Storage, Hosts, and, on unified systems, a dedicated File section. The Storage domain was where all block provisioning tasks were performed. This included creating RAID groups, building storage pools, carving out LUNs, and configuring advanced features like FAST Cache and FAST VP. The Hosts domain was used for managing host connectivity, which involved registering hosts by their initiators (WWNs or iSCSI names) and creating storage groups to control which hosts could see which LUNs. The E20-690 Exam would test the precise sequence of steps required to perform these core tasks.

The File domain in Unisphere was used to manage the Data Movers and their associated resources. Here, an engineer could create and manage file systems, configure networking for the Data Movers, and set up NFS exports and CIFS shares. Unisphere also provided tools for managing user authentication and permissions for file access. Because Unisphere integrated both block and file management, it enabled a holistic approach to storage administration. A candidate for the E20-690 Exam needed to demonstrate proficiency across all these functional areas to prove they could implement a truly unified storage solution.

Traditional Provisioning with RAID Groups and LUNs

Before the widespread adoption of storage pools, the primary method for provisioning storage on arrays like the VNX was through traditional RAID groups. The E20-690 Exam required a solid understanding of this method, as it formed the basis of storage allocation and was still used for specific use cases. A RAID group is a collection of physical disks of the same type and size, combined to act as a single logical device. The RAID level chosen for the group (e.g., RAID 5, RAID 6, RAID 1/0) determined how data was written across the disks and how it was protected against disk failures.

Creating a RAID group was a foundational task. The process involved selecting the disks, choosing the RAID type, and then letting the system initialize the group. The exam curriculum would cover the pros and cons of each RAID type. For example, RAID 5 offers good performance and capacity efficiency but can only tolerate a single disk failure. RAID 6 provides protection against two simultaneous disk failures but incurs a higher write penalty. RAID 1/0 offers the best write performance but has a 50% capacity overhead. An implementation engineer needed to be able to recommend the appropriate RAID level based on the customer's application requirements for performance and availability.  
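The capacity and availability trade-offs above can be made concrete with a little arithmetic. The following sketch (an illustration, not any vendor tool) computes usable capacity and tolerated disk failures for the RAID levels discussed:

```python
# Illustrative only: usable capacity and failure tolerance for the RAID
# levels discussed above, given a disk count and per-disk capacity.
def raid_usable_gb(level: str, disks: int, disk_gb: int) -> tuple[int, int]:
    """Return (usable_gb, tolerated_disk_failures) for a RAID group."""
    if level == "RAID5":          # single parity: capacity of N-1 disks
        return (disks - 1) * disk_gb, 1
    if level == "RAID6":          # double parity: capacity of N-2 disks
        return (disks - 2) * disk_gb, 2
    if level == "RAID10":         # mirrored pairs: 50% capacity overhead;
        return (disks // 2) * disk_gb, 1  # guaranteed to survive one failure
    raise ValueError(f"unsupported RAID level: {level}")

# A common VNX RAID 5 layout was 4+1 (five disks):
print(raid_usable_gb("RAID5", 5, 600))   # (2400, 1)
print(raid_usable_gb("RAID6", 8, 2000))  # (12000, 2)
print(raid_usable_gb("RAID10", 8, 600))  # (2400, 1)
```

Note that RAID 1/0 can in practice survive more than one failure as long as no mirror pair loses both members; the function reports the guaranteed minimum.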

Once a RAID group was created, LUNs (Logical Unit Numbers) could be bound, or carved out, from it. A LUN is the logical volume that is ultimately presented to a host. When creating a LUN from a traditional RAID group, the administrator had to specify its exact size. Multiple LUNs could be created within a single RAID group, but they all shared the performance characteristics and spindles of that underlying group. This could sometimes lead to performance contention, a problem that storage pools were designed to solve. Understanding this potential for "noisy neighbors" was an important concept.  

The management of traditional RAID groups required careful planning. Since RAID groups could not be easily expanded, an administrator had to size them correctly from the start. Furthermore, all disks in a RAID group had to be of the same type, size, and speed. This method of provisioning was less flexible than using pools, but it offered predictable performance, which was desirable for certain applications. The E20-690 Exam would ensure that a candidate understood both the mechanics of creating traditional LUNs and the strategic reasons why this method might still be chosen over the more modern pool-based approach.

Modern Provisioning with Storage Pools

Storage pools represented a more advanced and flexible way to manage and provision storage on the VNX platform, and they were a major focus of the E20-690 Exam. Instead of provisioning LUNs directly from a single RAID group, a storage pool is an aggregation of multiple underlying private RAID groups, often comprising different disk tiers (Flash, SAS, NL-SAS). This abstraction layer simplified management and enabled powerful features. An engineer would create a pool by selecting a number of disks of various types, and the system would automatically create the necessary underlying RAID structures.  

One of the key benefits of using storage pools was the ability to create pool LUNs, also known as thin LUNs. With thin provisioning, a LUN could be created with a logical size that was much larger than the physical capacity initially allocated to it within the pool. The system would then allocate physical storage blocks from the pool on-demand as data was actually written by the host. This "just-in-time" allocation method greatly improved storage utilization efficiency, as administrators no longer needed to overallocate capacity for every application. The E20-690 Exam would test the concepts of logical size versus consumed capacity.

Storage pools also simplified capacity expansion. Unlike traditional RAID groups, storage pools could be easily expanded by adding more disks. The new capacity would be seamlessly incorporated into the pool and become available for all LUNs within that pool. This eliminated the complex data migrations that were often required when a traditional RAID group ran out of space. The ability to grow the storage infrastructure non-disruptively was a significant operational advantage that implementation engineers needed to be able to explain and demonstrate.

From a performance perspective, storage pools spread the workload across a much larger number of disks than a typical RAID group. When a host writes to a LUN in a pool, the data is distributed across all the underlying RAID groups that make up the pool. This wide-striping effect generally leads to better and more consistent performance, as it minimizes the risk of hot spots on a small set of disks. For the E20-690 Exam, understanding how pools leveraged wide-striping to improve performance and how to design pools for different workloads was a critical area of knowledge.
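Wide-striping can be visualized with a simple round-robin placement model. The real pool allocator is more sophisticated, so treat this only as an intuition aid for how slices spread across the pool's private RAID groups:

```python
# Wide-striping sketch: slices of a pool LUN are distributed across all
# private RAID groups in the pool, so I/O is spread over many spindles.
from itertools import cycle

def stripe_slices(num_slices: int, private_raid_groups: list[str]) -> dict:
    """Round-robin slice placement (a simplification of the real allocator)."""
    placement = {}
    rg = cycle(private_raid_groups)
    for s in range(num_slices):
        placement.setdefault(next(rg), []).append(s)
    return placement

pool = ["RG0", "RG1", "RG2", "RG3"]  # four private RAID groups in the pool
layout = stripe_slices(10, pool)
print({rg: len(slices) for rg, slices in layout.items()})
# {'RG0': 3, 'RG1': 3, 'RG2': 2, 'RG3': 2}
```

Each private RAID group carries a near-equal share of the slices, which is why hot spots on any single set of spindles become far less likely.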

Implementing FAST VP and FAST Cache

The VNX platform included powerful performance optimization features that were essential topics for the E20-690 Exam. The most prominent of these was FAST VP, which stands for Fully Automated Storage Tiering for Virtual Pools. This feature worked in conjunction with storage pools that were built with multiple tiers of disk technology, such as a mix of high-performance Flash drives, mid-tier SAS drives, and high-capacity NL-SAS drives. FAST VP intelligently and automatically moved data between these tiers based on its activity level.  

The operation of FAST VP was based on data "slices." The system would divide a LUN into 1 GB slices and monitor the I/O activity for each slice over time. Slices with a high number of I/Os were considered "hot" and would be automatically migrated to the highest available performance tier, such as the Flash drives. Conversely, slices that were rarely accessed were considered "cold" and would be moved down to the capacity tier, like NL-SAS. This automated data relocation ensured that the most active data received the benefit of the fastest, most expensive storage, while optimizing the overall cost of the system. An engineer needed to know how to enable and configure FAST VP policies.

Complementing FAST VP was another feature called FAST Cache. While FAST VP operated on a long-term basis, moving data between tiers over hours or days, FAST Cache provided a real-time performance boost. FAST Cache utilized a set of Flash drives as a very large, secondary cache for the entire storage system, sitting in front of the traditional spinning disks. When a host read or wrote data, FAST Cache could absorb the I/O at Flash speed. It was particularly effective at handling unpredictable I/O spikes and random workloads.  

For the E20-690 Exam, a candidate would need to understand the distinct roles of these two features and when to use them. FAST Cache is a real-time I/O absorption mechanism, ideal for handling random read and write bursts. FAST VP is a data lifecycle management tool, ideal for moving data to the most appropriate long-term storage tier based on sustained access patterns. A successful implementation often involved using both features together to create a highly responsive and cost-effective storage solution. The ability to configure and validate the operation of both FAST VP and FAST Cache was a key skill for a certified professional.  

LUN Masking and Storage Groups

Provisioning a LUN is only the first step; the next critical phase, and a core topic for the E20-690 Exam, is controlling which hosts are allowed to access it. This process is known as LUN masking or access control. On the VNX, this was managed through a combination of host registration and the use of Storage Groups. Simply creating a LUN does not make it visible to any server on the network. It must be explicitly assigned to a specific host or group of hosts. This is a fundamental security and data integrity measure in any SAN environment.

The process began with host registration. Every device on a SAN has a unique identifier. For Fibre Channel, this is the World Wide Name (WWN) of its Host Bus Adapter (HBA). For iSCSI, it is the iSCSI Qualified Name (IQN). Before a host could be given access to storage, its initiators (the WWNs or IQNs) had to be registered with the VNX array. This created a host object within Unisphere. The E20-690 Exam required knowledge of how to find these initiator identifiers on various operating systems and how to manually or automatically register them on the array.  

Once hosts were registered, the next step was to create a Storage Group. A Storage Group is a container object in Unisphere that holds two things: a list of hosts and a list of LUNs. By placing a host and a LUN into the same Storage Group, a connection was established, and the host was granted permission to access that LUN. This model provided a clean and scalable way to manage permissions. Instead of managing access on a per-LUN, per-host basis, an administrator could manage access for groups of hosts to groups of LUNs.

A critical concept related to Storage Groups was the Host LUN ID (HLU). When a LUN was added to a Storage Group, the VNX would assign it a LUN number that the host would see. By default, it would try to use the same number as the array-side LUN number (the ALU or Array LUN ID), but this could be manually changed. This was important because some older operating systems or clustering software had specific requirements for the LUN numbers they used. The E20-690 Exam would expect a candidate to understand the purpose of the HLU and the procedure for assigning LUNs to a Storage Group to make them accessible to a server.

Configuring Fibre Channel Connectivity

Fibre Channel (FC) is a high-speed networking technology that has long been the standard for enterprise Storage Area Networks (SANs). A significant portion of the E20-690 Exam was dedicated to ensuring that an implementation engineer could correctly configure a VNX array for operation within an FC SAN. This process started with the physical connection, which involved plugging Fibre Channel cables from the UltraFlex I/O modules on the Storage Processors into ports on the customer's Fibre Channel switches. Using the correct SFP transceivers and cable types was the first step.  

Logically, the most important task in an FC SAN is zoning. Zoning is performed on the FC switches and is analogous to creating a VLAN in an Ethernet network. It controls which devices are allowed to communicate with each other. Best practice, and a key concept for the E20-690 Exam, dictated using single-initiator, single-target zoning. This means that a zone would contain exactly one host HBA port (the initiator) and one storage array port (the target). This granular approach enhances security and stability by preventing hosts from interfering with each other's traffic. The implementation engineer needed to be able to provide the WWNs of the VNX ports to the switch administrator for proper zoning.  

Once zoning was in place, the host could discover the VNX array. The next step on the array itself was host registration, as discussed previously. The VNX would automatically detect the host's HBAs that were zoned to it, and they would appear in the connectivity status list in Unisphere. The engineer would then register these initiators, associating them with a specific host object. This process confirmed the communication path and prepared the system for LUN presentation.

After registration, LUNs could be assigned to the host via a Storage Group. The host operating system would then need to be instructed to rescan for new storage devices. Upon a successful rescan, the new LUN would appear to the OS as a local disk, ready to be partitioned, formatted with a file system, and used by applications. The E20-690 Exam tested the end-to-end workflow, from the physical port on the array to a usable disk on the host, ensuring the engineer understood every step required to provide block storage over a Fibre Channel SAN.

Implementing iSCSI for Block Storage

iSCSI (Internet Small Computer System Interface) is a protocol that allows block storage commands to be sent over standard TCP/IP networks. It provides a lower-cost alternative to Fibre Channel, as it can run on the same Ethernet infrastructure used for regular network traffic. The E20-690 Exam required candidates to be just as proficient in configuring iSCSI as they were with Fibre Channel. This began with configuring the iSCSI ports on the VNX's Ethernet I/O modules. This included assigning IP addresses, subnet masks, and default gateways to these ports.  

A critical best practice for iSCSI that was emphasized in the E20-690 Exam materials was network isolation. For performance and security, iSCSI traffic should always be segregated from general user traffic. This is typically achieved by using dedicated physical switches or by configuring VLANs (Virtual Local Area Networks) on shared switches. The implementation engineer would need to work with the network administrator to ensure that a dedicated, non-routable network or VLAN was set up for storage traffic, and that the VNX iSCSI ports were configured with the correct VLAN tags if necessary.

On the host side, the server needs an iSCSI initiator. This can be a software initiator, which is built into most modern operating systems, or a hardware initiator, which is a dedicated iSCSI HBA card. The engineer would need to configure the initiator with the IP address of the VNX's iSCSI target ports. The host would then perform a discovery process to find the available iSCSI targets on the array. Once discovered, the host would log in to the target, establishing a session through which storage commands could be sent.

Similar to Fibre Channel, once the iSCSI session was established, the host's initiator name (its IQN) would need to be registered on the VNX and added to a Storage Group along with the desired LUNs. After a rescan on the host, the iSCSI LUNs would appear as local disks. The E20-690 Exam would expect an engineer to understand the nuances of iSCSI, including concepts like multipathing (using multiple network paths for redundancy and performance) and security features like CHAP (Challenge-Handshake Authentication Protocol) to secure the login process between the initiator and the target.


