100% Real Microsoft 70-564 Exam Questions & Answers, Accurate & Verified By IT Experts
Instant Download, Free Fast Updates, 99.6% Pass Rate
Microsoft 70-564 Practice Test Questions in VCE Format
| File | Votes | Size | Date |
|---|---|---|---|
| Microsoft.SelfTestEngine.70-564.v2012-11-14.by.manibala.112q.vce | 2 | 221.26 KB | Jan 29, 2013 |
| Microsoft.SelfTestEngine.70-564.v2012-08-29.by.jonty.115q.vce | 1 | 211.54 KB | Aug 29, 2012 |
| Microsoft.Certkey.70-564.v2012-03-15.by.Hadar.109q.vce | 2 | 199.17 KB | Mar 15, 2012 |
| Microsoft.Pass4Sure.70-564.v2012-02-16.by.Roman.94q.vce | 1 | 186.24 KB | Feb 16, 2012 |
| Microsoft.Pass4Sure.70-564.v2012-01-25.by.WHYNOTO.94q.vce | 1 | 183.24 KB | Jan 29, 2012 |
Archived VCE files
| File | Votes | Size | Date |
|---|---|---|---|
| Microsoft.Certkey.70-564.v2011-06-08.by.Adelbert.105q.vce | 1 | 185.05 KB | Jun 09, 2011 |
| Microsoft.SelfTestEngine.70-564.v2010-08-02.by.Cathy.101q.vce | 1 | 176.81 KB | Aug 04, 2010 |
| Microsoft.SelfTestEngine.70-564.v2010-05-26.by.Bongwe.96q.vce | 1 | 167.57 KB | May 26, 2010 |
| Microsoft.SelfTestEngine.70-564.v2010-02-17.by.Rick.92q.vce | 1 | 161.6 KB | Feb 21, 2010 |
Microsoft 70-564 Practice Test Questions, Exam Dumps
Microsoft 70-564 (PRO: Designing and Developing ASP.NET Applications) exam dumps, VCE files, practice test questions, study guide and video training course to help you study and pass quickly and easily. You need the Avanset VCE Exam Simulator in order to study the Microsoft 70-564 certification exam dumps and practice test questions in VCE format.
The 70-564 Exam, titled "PRO: Designing and Developing ASP.NET Applications Using the Microsoft .NET Framework 3.5," represents a significant milestone in the history of web development. As a professional-level exam for the MCPD certification, it validated a developer's expertise in what was, at the time, a mature and powerful platform. Although the exam and the .NET 3.5 framework are now retired, the principles and architectural patterns it covered laid the groundwork for modern web application development, including ASP.NET Core.
Studying the curriculum of the 70-564 Exam offers a fascinating look into the evolution of web technologies. It provides a structured way to understand the Web Forms model, which aimed to bring the simplicity of desktop application development to the web. It also covers the introduction of transformative technologies like LINQ and the ASP.NET AJAX Extensions, which were Microsoft's answer to the growing demand for data-driven, interactive web experiences. This series will use the 70-564 Exam as a historical lens to explore these foundational concepts.
This first part of the series is dedicated to the core architecture of an ASP.NET 3.5 application. We will delve into the Web Forms model, dissect the intricate ASP.NET page lifecycle, and understand the crucial role of server controls. We will also explore the mechanisms for state management, such as ViewState and Session State, which were fundamental to the Web Forms paradigm. A solid grasp of these concepts was essential for any developer aspiring to pass the 70-564 Exam.
By deconstructing these foundational elements, we can appreciate the problems that the ASP.NET 3.5 framework was designed to solve. It was a platform built to enable rapid application development by abstracting away the complexities of the underlying HTTP protocol. Understanding this design philosophy is key to understanding not only how legacy ASP.NET applications work but also why modern frameworks like ASP.NET Core evolved in the way they did.
At the heart of the 70-564 Exam curriculum was the ASP.NET Web Forms model. This model was a revolutionary attempt to simplify web development by emulating the event-driven programming model of desktop applications, like those built with Visual Basic. The central idea was to abstract away the stateless nature of the HTTP protocol. Developers could design a user interface by dragging and dropping controls onto a design surface, and then write event handler code for those controls, much like they would for a Windows application.
A Web Forms page, identified by its .aspx extension, consists of two parts: the presentation layer and the code layer. The presentation layer contains HTML markup mixed with special tags for ASP.NET server controls. The code layer, contained in a separate "code-behind" file (e.g., Default.aspx.cs), holds the C# or VB.NET code that provides the application's logic. This separation was a significant improvement over classic ASP, where code and markup were often intermingled in a single file.
The magic of Web Forms lies in the concept of a "postback." When a user interacts with a server control on a page, such as clicking a button, the entire form is posted back to the server. The ASP.NET runtime then reconstructs the page, raises the appropriate event (like Button_Click), and executes the corresponding event handler in the code-behind. After the event handler runs, the page is re-rendered into HTML and sent back to the browser.
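To make the postback model concrete, here is a minimal code-behind sketch. The control names (NameTextBox, GreetingLabel) and the handler are hypothetical, wired up through the button's OnClick attribute in the markup.

```csharp
// Default.aspx.cs -- minimal code-behind sketch; control names are hypothetical.
using System;
using System.Web.UI;

public partial class _Default : Page
{
    // Runs on every request, whether it is a first load or a postback.
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            // One-time initialization on the first GET request.
            GreetingLabel.Text = "Please enter your name.";
        }
    }

    // Wired to <asp:Button OnClick="SubmitButton_Click" ... /> in the markup;
    // raised on the server after the form posts back.
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        GreetingLabel.Text = "Hello, " + NameTextBox.Text;
    }
}
```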
This model, while powerful for rapid development, had its trade-offs. It created a level of abstraction that could sometimes hide what was actually happening over HTTP. It also relied heavily on a mechanism called ViewState to maintain the state of controls between postbacks, which could lead to large page sizes and performance issues. A deep understanding of this postback architecture and its implications was a fundamental requirement for the 70-564 Exam.
To effectively program in the Web Forms model, a developer needed a thorough understanding of the ASP.NET page lifecycle. This was a critical and often-tested topic in the 70-564 Exam. The page lifecycle is the sequence of events that occurs every time a page is requested from the server, from the moment it is instantiated until it is rendered into HTML. Each event in the lifecycle provides an opportunity for the developer to inject custom code to initialize controls, load data, or perform other tasks.
The lifecycle begins with the PreInit event, which is the earliest stage a developer can access. This event is commonly used for tasks like dynamically setting the master page or theme of a page. Following this is the Init event, where all the controls on the page are initialized and have their unique IDs set. This is the stage where you would programmatically add dynamic controls to the page.
The Load event is perhaps the most commonly used. It fires after all controls have been initialized. This is a typical place to perform tasks that need to be done on every page request, such as connecting to a database and populating controls with data. It is important to note that during a postback, the Load event occurs before the control events (like Button_Click) are raised and handled.
After the Load event and any postback events, the PreRender event occurs just before the page is rendered into HTML. This is the last chance to make any changes to the page's controls before they are sent to the browser. Finally, the Unload event is fired after the page has been rendered and sent. This is the place to perform any cleanup tasks, such as closing database connections or releasing resources. Mastering this event sequence was key to writing predictable and bug-free Web Forms code.
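The sketch below shows where these stages appear in a code-behind class, assuming the default AutoEventWireup behavior; the commented master page path is hypothetical.

```csharp
// Sketch of the main lifecycle stages in a code-behind class (illustrative only).
using System;
using System.Web.UI;

public partial class LifecycleDemo : Page
{
    protected void Page_PreInit(object sender, EventArgs e)
    {
        // Earliest hook: the master page and theme can still be changed here.
        // MasterPageFile = "~/Site.master";   // hypothetical path
    }

    protected void Page_Init(object sender, EventArgs e)
    {
        // Controls exist and have their IDs; a good place to add dynamic controls.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Runs on every request; during a postback it fires before control
        // events such as Button_Click are raised.
    }

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Last chance to adjust controls before the HTML is rendered.
    }

    protected void Page_Unload(object sender, EventArgs e)
    {
        // Page already rendered and sent; release resources here.
    }
}
```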
Server controls are the building blocks of a Web Forms application. These are special tags, typically with an asp: prefix, that are processed on the server and rendered as standard HTML. The 70-564 Exam required a comprehensive knowledge of the available controls and their properties. Server controls are object-oriented, meaning they have properties, methods, and events that can be accessed and manipulated in the server-side code-behind file.
Server controls can be categorized into several groups. HTML server controls are essentially standard HTML tags with an added runat="server" attribute. This allows them to be accessed and manipulated in the code-behind. Web controls, like <asp:TextBox> and <asp:Button>, provide a richer, more abstract object model than their HTML counterparts. They automatically handle browser differences and manage their state through ViewState.
Validation controls were another important category. These controls, such as <asp:RequiredFieldValidator> and <asp:RegularExpressionValidator>, provided a simple, declarative way to add both client-side and server-side validation to user input without writing complex validation logic. They could be attached to input controls to enforce rules like required fields, data types, and specific patterns.
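On the server side, the outcome of the validators should always be checked before acting on a postback, since client-side validation can be bypassed. A minimal sketch, assuming a hypothetical SaveButton_Click handler in a page's code-behind:

```csharp
// Verify that all validators on the page passed before processing the input.
protected void SaveButton_Click(object sender, EventArgs e)   // hypothetical handler
{
    if (!Page.IsValid)
    {
        return;   // a validator failed; its ErrorMessage is displayed on the page
    }
    // Safe to process the validated input here.
}
```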
Data controls, like <asp:GridView> and <asp:Repeater>, were designed to simplify the display of data from a database. These controls could be bound to a data source, and they would automatically generate the necessary HTML table structure to display the data. Rich controls, like the <asp:Calendar>, provided complex, pre-built UI functionality. A deep familiarity with this rich control ecosystem was essential for rapid application development in the Web Forms era.
Because the HTTP protocol is stateless, every request from a browser to a web server is an independent event. The server has no inherent memory of previous requests. To create the illusion of a continuous application, ASP.NET Web Forms relied heavily on state management techniques. The 70-564 Exam required a deep understanding of these mechanisms, particularly ViewState and Session State.
ViewState is the primary mechanism that Web Forms uses to preserve the state of a page and its controls across postbacks. When a page is rendered, the current values of its controls' properties are serialized into a hidden, base64-encoded string. This string is then placed in a hidden input field (__VIEWSTATE) on the page. When the page is posted back to the server, the ASP.NET runtime deserializes this string and uses it to restore the page and its controls to their previous state.
While powerful, ViewState could lead to performance problems as it could significantly increase the size of the page being sent to and from the browser. A skilled developer needed to know when and how to disable ViewState for specific controls or for the entire page to optimize performance.
Session State is a server-side state management mechanism. It provides a dictionary-like object where you can store data that is specific to a single user's session. This data is stored on the server and is associated with the user via a session ID, which is typically managed with a cookie. Session State is ideal for storing user-specific information, like a shopping cart or user login details, that needs to persist across multiple page requests.
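A minimal sketch of both mechanisms inside a hypothetical page class; the keys, the counter property, and the shopping-cart list are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Web.UI;

public partial class CartPage : Page   // hypothetical page
{
    // ViewState: per-page state that round-trips in the hidden __VIEWSTATE field.
    protected int PageViewCount
    {
        get { return ViewState["PageViewCount"] == null ? 0 : (int)ViewState["PageViewCount"]; }
        set { ViewState["PageViewCount"] = value; }
    }

    // Session State: per-user state kept on the server, keyed by the session cookie.
    protected void AddToCart(string productId)
    {
        var cart = Session["Cart"] as List<string> ?? new List<string>();
        cart.Add(productId);
        Session["Cart"] = cart;   // reassign so out-of-process session stores see the change
    }
}
```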
Every ASP.NET application is configured using an XML file named web.config. This file is the central hub for all application-level settings and was a critical topic for the 70-564 Exam. The web.config file uses a hierarchical structure, allowing you to apply settings to the entire application or to specific subdirectories. It controls a vast range of behaviors, from how the application is compiled to how it handles security and errors.
A key section of the web.config file is <appSettings>. This section provides a simple key-value store where you can place custom application settings, such as file paths, feature flags, or other configuration values. These settings can then be easily read from your code-behind, allowing you to change application behavior without recompiling the code.
Another crucial section is <connectionStrings>. This is the standard place to store your database connection strings. Storing them in web.config separates them from your application code and makes it easy to change the database connection for different environments (e.g., development, testing, production). You can also encrypt this section of the web.config file for added security.
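Both sections are read from code through the ConfigurationManager class. A small sketch, assuming a hypothetical "UploadPath" key and a "MainDb" connection string:

```csharp
// Reading custom settings and a connection string from web.config
// (the key and connection string names are hypothetical).
using System.Configuration;

public static class AppConfig
{
    public static string UploadPath
    {
        get { return ConfigurationManager.AppSettings["UploadPath"]; }
    }

    public static string MainConnectionString
    {
        get { return ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString; }
    }
}
```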
The web.config file is also where you configure core ASP.NET pipeline behaviors. The <system.web> section contains settings for authentication modes (e.g., Forms, Windows), custom error pages, session state configuration, and debugging options. A thorough understanding of the web.config schema and its various sections was essential for any developer looking to build, configure, and deploy a professional ASP.NET application.
Before the advent of modern Object-Relational Mappers (ORMs), data access in .NET was primarily handled by ADO.NET. The 70-564 Exam required a strong foundation in these "classic" data access techniques, as they provide a direct and powerful way to interact with a database. ADO.NET consists of a set of classes that expose data access services to the developer. It is divided into two main components: the .NET Framework Data Providers and the DataSet.
A Data Provider is a set of components that are used to connect to a specific data source, execute commands, and retrieve results. The core objects of a data provider are the Connection, Command, DataReader, and DataAdapter. The Connection object establishes a connection to the database. The Command object is used to execute a SQL statement or a stored procedure against the database.
The DataReader provides a highly efficient, forward-only, read-only stream of data from the database. It is a connected approach, meaning the connection to the database must remain open while you are reading the data. This is ideal for quickly reading and processing large amounts of data with minimal memory overhead.
The DataAdapter and DataSet represent a disconnected approach. The DataAdapter acts as a bridge between the database and a DataSet. It uses a Command object to fill a DataSet, which is an in-memory representation of data, consisting of one or more DataTables. Once the DataSet is filled, the connection to the database can be closed. The application can then work with the data in memory, including modifying it, before using the DataAdapter to reconcile the changes back to the database.
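A hedged sketch of both approaches, assuming the "MainDb" connection string from earlier and a Products table:

```csharp
// Connected (DataReader) and disconnected (DataAdapter/DataSet) data access.
using System.Collections.Generic;
using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public class ProductData
{
    private static readonly string connectionString =
        ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;

    // Connected approach: a forward-only, read-only stream over an open connection.
    public static List<string> GetProductNames()
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Name FROM Products", connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
            }
        }
        return names;
    }

    // Disconnected approach: fill a DataSet, after which the connection can close.
    public static DataSet GetProducts()
    {
        using (var adapter = new SqlDataAdapter("SELECT * FROM Products", connectionString))
        {
            var ds = new DataSet();
            adapter.Fill(ds, "Products");
            return ds;
        }
    }
}
```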
One of the most transformative features introduced in .NET 3.5, and a major focus of the 70-564 Exam, was LINQ (Language Integrated Query). LINQ is a set of technologies that adds native data querying capabilities directly into the C# and VB.NET languages. Instead of writing query logic as strings that are opaque to the compiler, LINQ allows you to write queries using a declarative syntax that is strongly typed and checked at compile time.
LINQ provides a unified query model that can be used to query data from various sources, not just databases. There are several LINQ providers, including LINQ to Objects (for querying in-memory collections like lists and arrays), LINQ to XML (for querying XML documents), and LINQ to SQL and the Entity Framework (for querying relational databases). This meant that developers could use the same basic query syntax regardless of the data source.
A LINQ query in C# typically uses a syntax that is similar to SQL. It includes clauses like from, where, orderby, and select. For example, you could query an in-memory list of customer objects to find all customers from a specific city. The compiler translates this query syntax into a series of method calls using standard query operators, such as Where(), OrderBy(), and Select().
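A small LINQ to Objects sketch; the Customer class and the city filter are illustrative, and the comment shows the method-syntax form the compiler produces.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string Name { get; set; }
    public string City { get; set; }
}

public static class LinqDemo
{
    public static IEnumerable<string> LondonCustomers(List<Customer> customers)
    {
        var query = from c in customers
                    where c.City == "London"
                    orderby c.Name
                    select c.Name;

        // Equivalent method syntax:
        // var query = customers.Where(c => c.City == "London")
        //                      .OrderBy(c => c.Name)
        //                      .Select(c => c.Name);
        return query;
    }
}
```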
The benefits of LINQ were immense. It provided compile-time checking of queries, which caught errors that would have previously only been found at runtime. It offered full IntelliSense support in Visual Studio, making it much easier and faster to write complex queries. And it allowed developers to work with data in a more object-oriented way, reducing the impedance mismatch between the relational world of databases and the object-oriented world of .NET.
LINQ to SQL was Microsoft's first Object-Relational Mapping (ORM) technology to leverage the power of LINQ. It was a key data access technology in the .NET 3.5 timeframe and a significant part of the 70-564 Exam. LINQ to SQL provides a runtime infrastructure for managing relational data as objects without losing the ability to query. It allows developers to define .NET classes that map directly to tables in a SQL Server database.
The workflow typically involved using a visual designer in Visual Studio to create a .dbml (database markup language) file. You could drag tables from the Server Explorer onto the designer surface, and it would automatically generate the corresponding C# or VB.NET classes, complete with properties that mapped to the table's columns and relationships that mapped to the foreign keys. This generated a DataContext class, which was the main entry point for querying and updating the database.
Using the generated DataContext, a developer could write strongly typed LINQ queries against the database tables. For example, to get all orders for a specific customer, you could write a query against the DataContext.Orders property. The LINQ to SQL provider would then translate this LINQ query into an optimized T-SQL statement, execute it against the database, and materialize the results back into a collection of Order objects.
LINQ to SQL also provided a straightforward way to handle data modifications. When you changed the properties of an object that was retrieved from the DataContext, it would keep track of those changes. When you called the SubmitChanges() method on the DataContext, it would automatically generate the necessary INSERT, UPDATE, or DELETE statements and execute them against the database within a transaction. This greatly simplified the process of CUD (Create, Update, Delete) operations.
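A minimal sketch of this workflow, assuming a designer-generated NorthwindDataContext with a Customers table:

```csharp
// Querying and updating through a LINQ to SQL DataContext
// (NorthwindDataContext and its table properties are assumed designer-generated names).
using System.Linq;

public class OrderService
{
    public void RenameCustomerCity(string customerId, string newCity)
    {
        using (var db = new NorthwindDataContext())
        {
            // Translated into parameterized T-SQL by the LINQ to SQL provider.
            var customer = db.Customers.Single(c => c.CustomerID == customerId);

            customer.City = newCity;   // the change is tracked by the DataContext

            db.SubmitChanges();        // generates and runs the UPDATE in a transaction
        }
    }
}
```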
While LINQ to SQL was a powerful tool, it was limited to working only with SQL Server and was based on a one-to-one mapping between database tables and .NET classes. In parallel, Microsoft introduced the first version of the Entity Framework (EF), which was a more ambitious and flexible ORM. The 70-564 Exam covered the basics of this initial version of EF, which was also part of .NET 3.5.
The Entity Framework introduced the concept of an Entity Data Model (EDM). The EDM is a conceptual model that sits between the application's domain objects (the entities) and the physical database schema. This layer of abstraction allowed for more complex mappings. For example, a single entity in your application could be mapped to multiple tables in the database, or you could map entities to views or stored procedures.
Similar to LINQ to SQL, EF provided a visual designer for creating the model (an .edmx file) and generated a context class derived from ObjectContext (the forerunner of the later DbContext), along with entity classes. Developers could then query against this conceptual model using either LINQ to Entities (a LINQ provider for EF) or another query language called Entity SQL.
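A small LINQ to Entities sketch, assuming a hypothetical NorthwindEntities context generated from an .edmx model:

```csharp
// Querying an Entity Data Model with LINQ to Entities
// (NorthwindEntities is the assumed generated context class).
using System.Linq;

public class CustomerQueries
{
    public int CountCustomersInCity(string city)
    {
        using (var context = new NorthwindEntities())
        {
            return context.Customers.Count(c => c.City == city);
        }
    }
}
```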
The Entity Framework was designed to be provider-agnostic, meaning it could work with different database systems (not just SQL Server) by plugging in the appropriate database provider. While the first version was complex and had some performance issues, it laid the foundation for what would become the standard ORM for .NET development. Understanding its basic principles was important for developers looking to the future of data access on the platform.
To bridge the gap between the data access layer and the UI, ASP.NET Web Forms provided a set of data source controls. These were non-visual components that you could place on your page to manage the retrieval and modification of data. The 70-564 Exam required developers to be proficient with these controls, as they were central to the rapid application development promise of Web Forms.
The SqlDataSource control allowed you to connect directly to a database. You could configure it with a connection string and then write SELECT, INSERT, UPDATE, and DELETE statements directly within the control's properties. You could then bind a UI control, like a GridView, to the SqlDataSource, and the GridView would be able to display, page, sort, and even edit the data with very little custom code.
For developers using an object-oriented approach, the ObjectDataSource was the preferred choice. Instead of containing SQL statements, the ObjectDataSource was configured to call methods on a business object. For example, you could point its SelectMethod property to a GetProducts() method in your business logic layer. This provided a much better separation of concerns, keeping the data access logic out of the presentation layer.
For LINQ to SQL and the Entity Framework, Microsoft introduced the LinqDataSource and EntityDataSource controls, respectively. These controls allowed you to bind UI elements directly to your LINQ to SQL DataContext or your Entity Framework model. They provided a simple way to write LINQ queries declaratively in the page markup to filter and order the data. While powerful, this approach could sometimes lead to a blurring of the lines between the UI and the data layer.
The ultimate purpose of the data source controls was to provide data to the data-bound UI controls. The 70-564 Exam placed a heavy emphasis on these controls, as they were the workhorses for building data-driven user interfaces in Web Forms. The GridView is the most powerful and commonly used data control. It displays data in a customizable HTML table.
By simply binding a GridView to a data source control, you could get a feature-rich display of your data. The GridView has built-in support for paging (breaking the data into manageable pages), sorting (by clicking on column headers), and editing and deleting rows. You could enable these features by simply setting properties on the control. The GridView would then automatically work with the data source control to perform the necessary operations.
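The same binding can also be done manually in code-behind. A hedged sketch, assuming a GridView1 declared in the markup with AllowPaging enabled and an assumed ProductRepository helper:

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class ProductsPage : Page   // hypothetical page; GridView1 is declared in the markup
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            BindGrid();
        }
    }

    // Wired up via OnPageIndexChanging="GridView1_PageIndexChanging" on the GridView.
    protected void GridView1_PageIndexChanging(object sender, GridViewPageEventArgs e)
    {
        GridView1.PageIndex = e.NewPageIndex;
        BindGrid();
    }

    private void BindGrid()
    {
        GridView1.DataSource = ProductRepository.GetProducts();   // assumed data-access helper
        GridView1.DataBind();
    }
}
```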
The DetailsView and FormView controls were used for displaying a single record at a time. The DetailsView renders a record in a two-column table, with one row for each field. The FormView is more flexible and uses templates to allow you to define a custom layout for the record. Both controls have built-in modes for displaying, editing, and inserting records.
For more lightweight or custom layouts, the Repeater and ListView controls were the ideal choice. The Repeater is a simple, template-driven control that gives the developer complete control over the generated HTML. It simply iterates over the data source and renders a template for each item. The ListView is a more modern control that combines the templating flexibility of the Repeater with the built-in features (like paging and editing) of the GridView.
A key challenge in web development is maintaining a consistent look and feel across all the pages of a site. In the era of the 70-564 Exam, ASP.NET Web Forms solved this problem with a feature called Master Pages. A Master Page is a template that defines the common layout and user interface elements for a set of pages in your application. It typically includes elements like the site header, footer, navigation menu, and overall page structure.
A Master Page (.master file) is similar to a standard ASP.NET page, but it contains one or more ContentPlaceHolder controls. These placeholders define the regions of the page where content pages can insert their specific content. A developer could create a single Master Page to define the site's branding and navigation, and then all other pages in the site could be based on this template.
Content pages (.aspx files) are then linked to a Master Page. A content page contains Content controls that correspond to the ContentPlaceHolder controls in the Master Page. At runtime, the ASP.NET engine merges the Master Page and the content page to produce the final, rendered HTML page. This model allowed for a powerful separation of layout from content.
This approach greatly simplified site maintenance. If you needed to change the company logo or add a new item to the navigation menu, you only had to modify the Master Page. The change would then be automatically reflected on every single page that used that master. Master Pages can also be nested, allowing for more complex and hierarchical layout structures. A solid understanding of this feature was essential for building professional, maintainable web applications.
While Master Pages provide reusability at the page level, User Controls provide reusability for smaller, component-level pieces of a user interface. A User Control (.ascx file) is a way to group a collection of markup and controls into a single, self-contained, reusable unit. This was a core concept for building modular applications and a key topic in the 70-564 Exam.
For example, if you had a complex search form with several text boxes, dropdown lists, and a search button that appeared on multiple pages of your site, you could encapsulate it into a SearchForm user control. You would create the UI and the associated code-behind logic once in the .ascx file. Then, you could easily place this user control on any ASP.NET page, just like any other server control.
User Controls have their own lifecycle, similar to a page, and they can expose public properties and methods. This allows the host page to interact with the user control. For example, the SearchForm user control could expose a SearchTerm property that the host page could read after a search was performed. This enables communication between the page and the components it contains.
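A minimal sketch of such a control's code-behind; the SearchForm name, its SearchTextBox, and the SearchRequested event are all illustrative.

```csharp
// Code-behind for a hypothetical SearchForm.ascx user control.
using System;
using System.Web.UI;

public partial class SearchForm : UserControl
{
    // Raised so the host page can react when a search is requested.
    public event EventHandler SearchRequested;

    // Host pages read this after the event fires.
    public string SearchTerm
    {
        get { return SearchTextBox.Text.Trim(); }   // SearchTextBox is declared in the .ascx markup
    }

    protected void SearchButton_Click(object sender, EventArgs e)
    {
        if (SearchRequested != null)
        {
            SearchRequested(this, EventArgs.Empty);
        }
    }
}
```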
Using User Controls promoted better code organization and reusability. It allowed developers to break down a complex UI into smaller, more manageable pieces. This made the application easier to develop, test, and maintain. Any piece of UI that was repeated or that represented a distinct, logical component was a good candidate for being turned into a User Control.
By the time of the .NET 3.5 framework, web development was rapidly moving towards more interactive and responsive user experiences, a trend known as Web 2.0. To compete with other popular frameworks, Microsoft introduced the ASP.NET AJAX Extensions. This was a framework that integrated both client-side and server-side components to make it easier to build rich, interactive web applications. Understanding this technology was a major part of the 70-564 Exam.
The core idea behind AJAX (Asynchronous JavaScript and XML) is to allow a web page to make requests to the server in the background, without requiring a full page refresh or postback. The page can send data to and retrieve data from the server asynchronously and then update a portion of the page's content using client-side JavaScript. This results in a much smoother and more responsive user experience, closer to that of a desktop application.
The ASP.NET AJAX Extensions were designed to make it easy for Web Forms developers to add this functionality to their applications with minimal effort. The framework provided a set of server controls and a client-side JavaScript library that worked together to abstract away the complexities of making asynchronous requests and manipulating the page's Document Object Model (DOM).
The framework was divided into several key components. On the server side, it included controls like the ScriptManager and the UpdatePanel, which provided the easiest way to enable partial-page updates. It also included a framework for creating and consuming web services from client-side script. On the client side, it provided the Microsoft AJAX Library, a JavaScript library that extended JavaScript with features like object-oriented programming constructs and a simplified API for making network requests.
The UpdatePanel was the flagship control of the ASP.NET AJAX Extensions and the simplest way to add AJAX functionality to an existing Web Forms page. A developer preparing for the 70-564 Exam needed to be an expert in its use. The UpdatePanel is a container control. Any controls placed inside an UpdatePanel participate in what are called partial-page updates.
The magic of the UpdatePanel is that when a postback is initiated by a control inside it (like a button click), it intercepts the postback and turns it into an asynchronous background request. The server still processes the request through the normal page lifecycle, but instead of re-rendering the entire page, it only renders the content of the UpdatePanel that initiated the postback. This partial HTML is then sent back to the browser, and a client-side script swaps out the old content with the new content.
This process is almost entirely transparent to the developer. You could take an existing Web Forms page, wrap a portion of it in an UpdatePanel, and it would instantly become more responsive, without requiring any changes to the code-behind event handlers. This made it incredibly easy to "AJAX-enable" legacy applications.
Of course, this simplicity came with its own set of trade-offs. The UpdatePanel still went through the entire page lifecycle on the server, and it still used ViewState to maintain the state of the controls. The amount of data sent back and forth could still be quite large. While it was a powerful tool for rapid development, a skilled developer also needed to know when to use more advanced, lower-level AJAX techniques for better performance and control.
For scenarios that required more control than the UpdatePanel could offer, the 70-564 Exam required knowledge of how to call server-side logic directly from client-side JavaScript. The ASP.NET AJAX Extensions provided a framework for exposing server-side methods to the client. You could take a method in your code-behind, or in a separate ASMX web service, and mark it with a special attribute.
The ScriptManager control on the page would then automatically generate a client-side JavaScript proxy for that method. This meant that a developer could call the server-side C# method from their JavaScript code as if it were a local JavaScript function. The framework would handle the details of creating and sending the asynchronous request to the server, deserializing the request, invoking the server method, and serializing the return value back to the client.
This approach provided much more fine-grained control. Instead of sending the entire ViewState and rendering partial HTML, you could make a lightweight call to a server method that returned just the data you needed, perhaps in JSON format. The client-side JavaScript would then be responsible for using that data to update the UI. This was a more efficient and flexible approach, though it required more JavaScript programming.
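A hedged sketch of the server side of such a call, using a static page method; the class name and the ProductRepository helper are assumptions, and the page needs a ScriptManager with EnablePageMethods="true".

```csharp
// A static page method exposed to client script.
using System.Web.Services;
using System.Web.Script.Services;

public partial class ProductLookupPage : System.Web.UI.Page
{
    [WebMethod]
    [ScriptMethod]   // the result is serialized to JSON for the client-side proxy
    public static string GetProductName(int productId)
    {
        // Lightweight lookup: no ViewState, no partial HTML rendering.
        return ProductRepository.GetName(productId);   // assumed helper
    }
}
```

On the client, the ScriptManager emits a PageMethods proxy, so script can call PageMethods.GetProductName(id, onSuccess) without triggering a full postback.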
This capability was a key enabler for building more sophisticated and client-centric applications. It allowed developers to create highly interactive components, like auto-complete text boxes or live data grids, that communicated with the server in a very efficient manner. It represented a middle ground between the high-level abstraction of the UpdatePanel and writing raw AJAX calls from scratch.
Creating a visually appealing and consistently styled application is crucial for user experience. The 70-564 Exam covered the features that ASP.NET provided for managing the look and feel of a web application. The primary mechanism for this was CSS (Cascading Style Sheets), which is the standard web technology for defining the presentation of HTML documents. ASP.NET server controls render as HTML, and they have properties like CssClass that allow you to assign CSS classes to them.
Beyond basic CSS, ASP.NET provided a powerful feature called Themes and Skins. A theme is a collection of resources that can be applied to a web application to define its overall appearance. A theme can include CSS files, images, and special files called skin files (.skin). A skin file allows you to define default property settings for ASP.NET server controls.
For example, you could create a skin file that specifies that all <asp:Button> controls in your application should have a specific background color and font size. You could then apply this theme to your entire application by setting an attribute in the web.config file. This would ensure that all buttons across your site had a consistent look without you having to set the properties on each individual button.
You could also create multiple themes for a single application, allowing the user to switch between them. For instance, you could have a "light" theme and a "dark" theme. The theming system provided a powerful way to separate the visual design of an application from its functional logic, making it easier for designers and developers to collaborate and for the application's look and feel to be updated independently.
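A small sketch of per-user theme selection in a page's code-behind; it must run in PreInit because that is the last point at which Page.Theme can be set (the theme names and the cookie are illustrative).

```csharp
// Apply a theme chosen by the user before the page's controls are created.
protected void Page_PreInit(object sender, EventArgs e)
{
    var cookie = Request.Cookies["preferredTheme"];
    Page.Theme = (cookie != null && cookie.Value == "Dark") ? "Dark" : "Light";
}
```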
Application security is a paramount concern for any web developer, and the 70-564 Exam placed a strong emphasis on the security features built into the ASP.NET framework. At the core of this was the ASP.NET Membership provider model. This was a flexible and extensible system designed to handle the common tasks of user authentication, which is the process of verifying a user's identity.
The Membership framework provided a ready-made solution for storing and validating user credentials (usernames and passwords). It included a standard database schema for storing user information and a robust API for creating new users, validating logins, managing passwords, and handling password recovery. By using the Membership provider, developers could implement a secure login system without having to write all the underlying database code and password hashing logic from scratch.
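A minimal sketch of the API in use; the class, the user details, and the Forms authentication cookie call are illustrative.

```csharp
// Creating and validating users with the Membership API.
using System.Web.Security;

public class AccountService
{
    public bool Register(string userName, string email, string password)
    {
        MembershipCreateStatus status;
        Membership.CreateUser(userName, password, email,
                              null, null, true, out status);   // no question/answer, approved
        return status == MembershipCreateStatus.Success;
    }

    public bool SignIn(string userName, string password)
    {
        if (Membership.ValidateUser(userName, password))
        {
            FormsAuthentication.SetAuthCookie(userName, false);   // issue the auth ticket
            return true;
        }
        return false;
    }
}
```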
ASP.NET provided a default SqlMembershipProvider that worked with the standard schema in a SQL Server database. However, the provider model meant that you could create your own custom provider to store user credentials in a different data store, such as an XML file, a different database system, or an existing legacy user table. This made the framework highly adaptable to different application requirements.
To complement the Membership API, ASP.NET also included a set of pre-built login controls, such as the Login, CreateUserWizard, and PasswordRecovery controls. These were server controls that provided a complete user interface for all the common authentication tasks. A developer could simply drag these controls onto a page to create a fully functional and secure login system with very minimal effort.
Once a user has been authenticated, the next step in security is authorization. This is the process of determining whether an authenticated user has permission to access a specific resource or perform a particular action. The 70-564 Exam required a thorough understanding of the ASP.NET Role Management system, which provided a framework for implementing role-based authorization.
The Role Management provider, like the Membership provider, was an extensible system for managing the assignment of users to roles. A role is simply a named group, such as "Administrators," "Managers," or "Editors." The framework provided an API for creating and deleting roles, adding users to and removing users from roles, and checking if a user belongs to a specific role.
The default SqlRoleProvider stored this information in a standard set of tables in a SQL Server database, which were designed to work with the Membership database. By assigning users to roles, you could manage permissions for groups of users instead of having to manage them for each individual user, which greatly simplifies security administration.
Authorization rules could then be defined declaratively in the web.config file. Using the <authorization> tag, you could specify rules for a specific page or a directory of pages, allowing or denying access based on the user's role. For example, you could configure a /Admin directory to allow users in the "Administrators" role and deny everyone else, anonymous and authenticated alike (rules are evaluated top to bottom, so the allow rule must precede the deny-all rule). This provided a powerful and centralized way to manage access control for your application.
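The same checks can also be made imperatively through the Roles API. A small sketch with illustrative role, user, and method names:

```csharp
// Managing and checking roles with the Role Management API.
using System.Web.Security;

public class RoleSetup
{
    public void EnsureAdminRole(string adminUserName)
    {
        if (!Roles.RoleExists("Administrators"))
        {
            Roles.CreateRole("Administrators");
        }
        if (!Roles.IsUserInRole(adminUserName, "Administrators"))
        {
            Roles.AddUserToRole(adminUserName, "Administrators");
        }
    }

    public bool CanManageUsers(string userName)
    {
        return Roles.IsUserInRole(userName, "Administrators");
    }
}
```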
Maintaining the health and stability of a running web application is a critical operational task. The 70-564 Exam covered the ASP.NET Health Monitoring system, a sophisticated and often overlooked feature of the framework. This system provides a flexible and extensible way to log important application events, such as unhandled exceptions, security failures, application startup and shutdown events, and performance metrics.
Health Monitoring works by defining a set of event providers, which are sources of information, and event consumers (also called listeners), which are destinations for that information. The framework included a rich set of built-in providers that could capture a wide range of events, from audit events like successful and failed logins to web request events that provided detailed information about each request processed by the application.
It also included a set of built-in consumers. For example, the SqlWebEventProvider could be configured to log event information to a SQL Server database. The EventLogWebEventProvider logged events to the Windows Event Log, and the MailWebEventProvider could be configured to send an email notification when a specific type of event occurred, such as a critical error.
All of this was configured within the web.config file in the <healthMonitoring> section. A developer or administrator could create rules to map specific events from specific providers to one or more consumers. This provided a powerful, declarative way to set up a comprehensive logging and alerting system for an application without writing any custom code. It was an essential tool for diagnosing problems and monitoring the health of a production application.
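Applications can also raise their own events into this pipeline. A hedged sketch of a custom event derived from WebBaseEvent; the event class, message, and event-code offset are illustrative, and the event is routed to whichever consumers the <healthMonitoring> rules map it to.

```csharp
// A custom health-monitoring event raised from application code.
using System.Web.Management;

public class OrderImportEvent : WebBaseEvent
{
    public OrderImportEvent(string message, object eventSource)
        : base(message, eventSource, WebEventCodes.WebExtendedBase + 1)
    {
    }
}

// Somewhere in application code:
// new OrderImportEvent("Nightly order import completed", null).Raise();
```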
Web application performance is a critical factor for user satisfaction and scalability. One of the most effective ways to improve performance is through caching. Caching is the process of storing frequently accessed data in a temporary, fast-access storage location to avoid the cost of retrieving it from its original, slower source. The 70-564 Exam required developers to be proficient with the various caching mechanisms provided by ASP.NET.
The most common type of caching is Application Caching. ASP.NET provides an in-memory cache, accessible through the HttpRuntime.Cache object, which can be used to store any kind of data, such as DataSets, business objects, or lists of product data. This is a simple key-value store. Before making an expensive database call, a developer could first check if the data already exists in the cache. If it does, it can be retrieved instantly, saving a round trip to the database.
The ASP.NET Cache object is more than just a simple dictionary. It provides powerful features like time-based expirations (where an item is automatically removed from the cache after a certain amount of time) and dependency-based expirations. For example, you could make a cached item dependent on a file on disk or on another cached item. If the dependency changes, the item is automatically evicted from the cache.
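A minimal cache-aside sketch using the application cache; the key, the ten-minute absolute expiration, and the ProductData helper (from the ADO.NET sketch earlier) are illustrative choices.

```csharp
// Cache-aside pattern with the ASP.NET application cache.
using System;
using System.Data;
using System.Web;
using System.Web.Caching;

public static class ProductCache
{
    public static DataSet GetProducts()
    {
        var cached = HttpRuntime.Cache["AllProducts"] as DataSet;
        if (cached != null)
        {
            return cached;   // served from memory, no database round trip
        }

        DataSet products = ProductData.GetProducts();   // assumed data-access helper

        HttpRuntime.Cache.Insert(
            "AllProducts",
            products,
            null,                              // no dependency; a CacheDependency could go here
            DateTime.UtcNow.AddMinutes(10),    // absolute expiration
            Cache.NoSlidingExpiration);

        return products;
    }
}
```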
Another powerful caching feature is Output Caching. This allows you to cache the entire HTML output of a page or a user control. When a page is output cached, the first time it is requested, it is processed normally. The resulting HTML is then stored in memory. For subsequent requests, instead of re-executing the entire page lifecycle, ASP.NET simply serves the cached HTML directly. This can provide a massive performance boost for pages that display data that does not change frequently.
Even in the best-written applications, errors can and do occur. How an application handles these errors is a mark of its quality. A key topic for the 70-564 Exam was how to implement robust error handling and effectively debug an ASP.NET application. When an unhandled exception occurs in an application, ASP.NET displays a detailed error page, often called the "Yellow Screen of Death." While this is very useful for developers during development, it should never be shown to end users.
To provide a better user experience, ASP.NET allows you to configure custom error pages. In the web.config file, using the <customErrors> tag, you can specify a default redirect page that users will be sent to in the case of any unhandled error. You can also specify different pages for specific HTTP error codes, such as a custom "Page Not Found" page for 404 errors.
For more granular control, you can handle the Error event in the Global.asax file. The Global.asax file contains code for application-level events. The Application_Error event handler is a centralized place where you can catch all unhandled exceptions that occur anywhere in your application. In this event handler, you can log the detailed exception information for later analysis and then redirect the user to a friendly error page.
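A minimal Global.asax sketch of this pattern; the logger call and the error page URL are illustrative.

```csharp
// Global.asax.cs -- centralized handling of unhandled exceptions.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception error = Server.GetLastError();

        // Log the full exception details for later analysis (logging helper is assumed).
        // Logger.Error(error);

        Server.ClearError();                    // prevent the default error page
        Response.Redirect("~/ErrorPage.aspx");  // send the user to a friendly page
    }
}
```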
For debugging during development, Visual Studio provides a powerful integrated debugger. You can set breakpoints in your code-behind files, and when the application is run in debug mode, execution will pause when a breakpoint is hit. You can then inspect the values of variables, step through your code line by line, and analyze the call stack. The debugger is an indispensable tool for diagnosing and fixing bugs in your application logic.
Web applications often need to support users from different cultures and who speak different languages. The process of designing an application to be adaptable to different cultures is called globalization, and the process of translating the application's UI for a specific culture is called localization. The 70-564 Exam required developers to understand the features that ASP.NET provides to support this.
The .NET Framework has a rich set of classes for handling cultural differences, such as the formatting of dates, times, numbers, and currencies. The culture of the current user is typically determined from the language settings of their web browser. ASP.NET can automatically detect this and set the appropriate culture for the current request. This ensures that data is displayed to the user in a format they are familiar with.
For localizing the text in a user interface, ASP.NET uses resource files (.resx). A resource file is an XML-based file that contains a set of key-value pairs. You can create a base resource file for your default language (e.g., WebResources.resx) and then create separate resource files for each additional language you want to support, using a specific naming convention (e.g., WebResources.fr-FR.resx for French).
You can then use special declarative expressions in your page markup to bind the properties of your server controls to the keys in these resource files. At runtime, ASP.NET will automatically select the correct resource file based on the user's current culture and substitute the appropriate translated text. This provides a clean separation of the UI's text from its design and logic, making the translation process much more manageable.
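A hedged sketch of both halves, culture selection and resource lookup, assuming a WebResources.resx global resource file and a hypothetical WelcomeLabel:

```csharp
// Choose the request culture from the browser's Accept-Language header and
// read a localized string from a global resource file.
using System;
using System.Globalization;
using System.Threading;
using System.Web.UI;

public partial class LocalizedPage : Page
{
    protected override void InitializeCulture()
    {
        if (Request.UserLanguages != null && Request.UserLanguages.Length > 0)
        {
            var culture = CultureInfo.CreateSpecificCulture(Request.UserLanguages[0]);
            Thread.CurrentThread.CurrentCulture = culture;     // dates, numbers, currency
            Thread.CurrentThread.CurrentUICulture = culture;   // which .resx file is used
        }
        base.InitializeCulture();
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        WelcomeLabel.Text = (string)GetGlobalResourceObject("WebResources", "WelcomeMessage");
    }
}
```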
A significant part of the "PRO" level 70-564 Exam was not just about writing code but about designing a well-structured and maintainable application. A common best practice for structuring an application is to use a multi-layered architecture. A typical three-layer architecture consists of a Presentation Layer, a Business Logic Layer (BLL), and a Data Access Layer (DAL). This separation of concerns is a fundamental principle of good software engineering.
The Presentation Layer is the user interface of the application. In the context of the 70-564 Exam, this would be the ASP.NET Web Forms pages (.aspx files), user controls, and master pages. The sole responsibility of this layer is to display data to the user and to collect user input. It should contain minimal business logic.
The Business Logic Layer is the core of the application. It contains the business rules, logic, and workflows that define how the application operates. The BLL coordinates the application's tasks, processes data, and makes logical decisions. It acts as an intermediary between the Presentation Layer and the Data Access Layer. The Presentation Layer calls methods on the BLL to perform actions and retrieve data.
The Data Access Layer is responsible for all communication with the database. It contains the code that performs the actual SELECT, INSERT, UPDATE, and DELETE operations. This could be done using ADO.NET, LINQ to SQL, or the Entity Framework. By encapsulating all data access in this layer, you isolate the rest of the application from the specifics of the database, making it easier to change the database system or data access technology in the future.
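A skeletal sketch of the three layers; every class and method name here is illustrative.

```csharp
using System.Collections.Generic;

// Data Access Layer: the only place that talks to the database.
public class ProductDal
{
    public List<Product> SelectActiveProducts()
    {
        // ADO.NET, LINQ to SQL, or Entity Framework code would go here.
        return new List<Product>();
    }
}

// Business Logic Layer: rules and coordination, no UI and no SQL.
public class ProductBll
{
    private readonly ProductDal dal = new ProductDal();

    public List<Product> GetProductsForDisplay()
    {
        var products = dal.SelectActiveProducts();
        products.RemoveAll(p => p.UnitPrice <= 0);   // an example business rule
        return products;
    }
}

// The Presentation Layer (a page's code-behind) only calls the BLL:
//   ProductsGrid.DataSource = new ProductBll().GetProductsForDisplay();
//   ProductsGrid.DataBind();

public class Product
{
    public string Name { get; set; }
    public decimal UnitPrice { get; set; }
}
```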
In the era of the 70-564 Exam, Service-Oriented Architecture (SOA) was a major industry trend. ASP.NET provided a straightforward way to create and consume web services using the ASMX technology. An ASMX web service is a component that exposes application logic over standard web protocols, typically SOAP (Simple Object Access Protocol) over HTTP. This allows different applications, potentially running on different platforms, to communicate with each other.
Creating an ASMX web service was very simple. A developer would create an .asmx file and a code-behind class that inherited from System.Web.Services.WebService. Public methods in this class that were decorated with the [WebMethod] attribute would be automatically exposed as operations of the web service. These methods could accept complex types as parameters and return complex types as results.
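A minimal ASMX sketch; the service name, namespace URI, and method are illustrative.

```csharp
// ProductService.asmx.cs -- a minimal ASMX web service.
using System.Web.Services;

[WebService(Namespace = "http://example.com/products/")]
public class ProductService : WebService
{
    [WebMethod]
    public string GetProductName(int productId)
    {
        // Parameters and return values are serialized to and from SOAP XML automatically.
        return "Product " + productId;
    }
}
```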
The .NET Framework would automatically handle the serialization of these objects into XML for transmission and the deserialization of the incoming SOAP messages. It also automatically generated a WSDL (Web Services Description Language) document for the service. The WSDL is a machine-readable XML document that describes the service's operations, parameters, and data types.
Consuming a web service in a .NET application was also made easy by Visual Studio. A developer could simply "add a web reference" and point it to the URL of the WSDL. Visual Studio would then read the WSDL and generate a client-side proxy class. The developer could then instantiate this proxy class and call the web service's methods as if they were local methods, with the proxy handling all the underlying communication details.
While ASMX web services were simple to use, they were limited to SOAP over HTTP. To address the need for more flexible and powerful service-oriented applications, Microsoft introduced Windows Communication Foundation (WCF) as part of .NET 3.0. WCF was a unified programming model for building services that could communicate over a variety of protocols and transport mechanisms. A foundational knowledge of WCF was an important part of the 70-564 Exam.
A core concept in WCF is the "ABC" of a service: Address, Binding, and Contract. The Address is the URL where the service is located. The Binding specifies how the service communicates. WCF provided a rich set of built-in bindings for different scenarios, such as a BasicHttpBinding for interoperability with ASMX services, a WSHttpBinding for more advanced and secure SOAP communication, and a NetTcpBinding for high-performance communication between .NET applications.
The Contract is the most important part. It is an interface that defines the operations that the service exposes. The service contract specifies what the service can do. A service is then created by writing a class that implements this contract interface. This contract-first approach promoted a well-defined and loosely coupled architecture.
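A minimal sketch of a contract and its implementation; the interface, method, and class names are illustrative, and the endpoints (address and binding) would be declared separately in configuration or code.

```csharp
// The "C" of Address, Binding, Contract: a WCF service contract and its implementation.
using System.ServiceModel;

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    string GetProductName(int productId);
}

public class ProductService : IProductService
{
    public string GetProductName(int productId)
    {
        return "Product " + productId;
    }
}
```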
WCF was a significant step up in complexity from ASMX, but it provided unparalleled flexibility. A single WCF service could be configured to expose multiple endpoints, each with a different address and binding. This meant that the same service could be made available to legacy ASMX clients over basic HTTP, to modern .NET clients over high-performance TCP, and to Java clients over a standardized WS-* security protocol, all without changing the service's code.
Another service-oriented technology introduced in .NET 3.5 and relevant to the 70-564 Exam was ADO.NET Data Services, later known as WCF Data Services. This was a framework for creating and consuming data-centric services based on the OData (Open Data Protocol) standard. OData is a web protocol for querying and updating data that exposes data as resources that can be identified by URIs.
The goal of ADO.NET Data Services was to make it incredibly easy to expose a data model, such as an Entity Framework model, as a RESTful web service. A RESTful service is one that follows the principles of Representational State Transfer, using the standard HTTP verbs (GET, POST, PUT, DELETE) to operate on resources.
With just a few lines of code, a developer could create a data service that wrapped their data model. This service would automatically expose all the entities in the model as resources. Clients could then query these resources using a standardized URI query syntax. For example, a client could retrieve a specific customer, find all orders over a certain amount, or navigate from a customer to their related orders, all by constructing the appropriate URL.
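A minimal sketch of such a service, assuming the hypothetical NorthwindEntities model from earlier; the access rule opens every entity set for read-only queries.

```csharp
// Products.svc.cs -- exposing an Entity Framework model through ADO.NET Data Services.
using System.Data.Services;

public class ProductsDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Read-only access to every entity set; clients then query with OData URIs,
        // e.g. /Products.svc/Customers('ALFKI')/Orders?$filter=Freight gt 100
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
```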
This provided a very flexible and standardized way to expose data over the web. It was particularly useful for building rich internet applications (RIAs) with technologies like Silverlight or JavaScript client libraries, as it provided a simple, HTTP-based way to query and update data without the complexity of SOAP. It was an early implementation of the modern web API paradigm.
Go to the testing centre with confidence when you use Microsoft 70-564 VCE exam dumps, practice test questions and answers. Microsoft 70-564 PRO: Designing and Developing ASP.NET Applications certification practice test questions and answers, study guide, exam dumps and video training course in VCE format help you study with ease. Prepare with confidence using Microsoft 70-564 exam dumps and practice test questions and answers in VCE format from ExamCollection.