Microsoft MCSA 70-768 Practice Test Questions in VCE Format
File | Votes | Size | Date
---|---|---|---
Microsoft.Dumps.70-768.v2017-01-12.by.Serhio.60q.vce | 15 | 479.11 KB | Jan 17, 2017
Microsoft MCSA 70-768 Practice Test Questions, Exam Dumps
Microsoft 70-768 (Developing SQL Data Models) practice test questions are provided in VCE format, along with a study guide and video training course, to help you study and pass quickly and easily. You need the Avanset VCE Exam Simulator in order to open the Microsoft MCSA 70-768 practice test files.
The Microsoft 70-768 Exam, "Developing SQL Data Models," was a crucial step for professionals seeking the MCSA: SQL 2016 BI Development certification. This exam was specifically designed for Business Intelligence (BI) developers, data modelers, and other data professionals responsible for creating robust, high-performance data models. Unlike exams focused on database administration or warehousing, the 70-768 Exam centered on the design and implementation of semantic models using SQL Server Analysis Services (SSAS). These models provide a business-friendly layer over complex data, enabling powerful analytics and reporting.
Passing the 70-768 Exam demonstrated a candidate's expertise in the two primary modeling paradigms within SSAS: the modern, in-memory Tabular model and the classic, OLAP-based Multidimensional model. It certified that an individual could not only build these models but also make the critical architectural decision about which model to use for a given business scenario. The curriculum was extensive, covering everything from initial design and development to implementing calculations, ensuring data quality, and securing the final model.
The exam required deep, practical knowledge of the development environment, SQL Server Data Tools (SSDT), and the specialized languages associated with each model. For Tabular models, this meant a strong command of Data Analysis Expressions (DAX). For Multidimensional models, a solid understanding of Multidimensional Expressions (MDX) was required. The test was a rigorous validation of the skills needed to transform raw data into a valuable analytical asset for an organization.
For a data professional's career, success in the 70-768 Exam signified a high level of competence in the Microsoft BI stack. It was a clear indicator to employers that the certified individual possessed the specialized skills to design and build the sophisticated data models that power enterprise-level reporting, dashboards, and self-service analytics solutions.
Before diving into the specifics of the 70-768 Exam, it is essential to understand the purpose of a semantic data model. A typical data warehouse contains dozens or even hundreds of tables with complex relationships and cryptic column names. While this structure is efficient for storing data, it is not user-friendly for business analysts or report authors. They should not need to be database experts to analyze data. This is the problem that a semantic model, like those built with SSAS, is designed to solve.
A semantic model acts as an intermediary layer between the data warehouse and the end-user reporting tools, such as Excel, Power BI, or Reporting Services. This model reorganizes and re-presents the data in a way that reflects business concepts, not physical table structures. It translates technical table and column names into familiar business terms, such as "Product," "Customer," and "Total Sales."
The model also pre-defines the relationships between these business entities, pre-calculates key business metrics (known as measures), and organizes attributes into logical hierarchies for drill-down analysis (e.g., Year, Quarter, Month). This provides a single, centralized version of the truth for all business reporting and analysis. It simplifies the user experience, improves query performance, and ensures that everyone in the organization is working from the same set of definitions and calculations.
The entire focus of the 70-768 Exam is on the skills required to design, build, and manage this critical semantic layer. A certified professional is an expert in creating these models that empower business users to explore data and make informed decisions.
A central challenge for any SSAS developer, and a core concept tested in the 70-768 Exam, is choosing between the two available modeling modes: Tabular and Multidimensional. These are two fundamentally different approaches to creating a semantic model, each with its own architecture, query language, and ideal use cases. Making the right choice at the beginning of a project is critical for its success.
The Multidimensional model is the classic, traditional OLAP (Online Analytical Processing) model. It organizes data into "cubes," which are composed of dimensions (like Time, Product, Geography) and measures (like Sales, Quantity). Data is pre-aggregated and stored in a highly optimized, multidimensional structure on disk. This model is extremely powerful for complex analytical queries and can handle massive data volumes. Its query language is MDX (Multidimensional Expressions).
The Tabular model, which was newer at the time of SQL Server 2016, is a modern, in-memory approach. It organizes data as a collection of related tables, much like a relational database. It is based on a powerful in-memory, columnar database engine called the VertiPaq engine, which provides incredibly fast query performance through compression and by keeping the entire model in RAM. The query and calculation language for Tabular models is DAX (Data Analysis Expressions).
The 70-768 Exam requires you to know the pros and cons of each. Tabular models are generally seen as easier and faster to develop and are ideal for self-service and team-level BI. Multidimensional models are often preferred for large-scale, complex corporate BI solutions with very rigid reporting requirements.
Regardless of whether you choose to build a Tabular or a Multidimensional model, both are based on the principles of dimensional modeling. A solid understanding of these principles is a prerequisite for the 70-768 Exam. Dimensional modeling is a data modeling technique that is optimized for analytical queries and reporting. The most common structure used in dimensional modeling is the star schema.
A star schema is a simple, intuitive design that consists of two types of tables: a single "fact table" and one or more "dimension tables." The fact table is located at the center of the star and contains the quantitative, numerical data that you want to analyze. These are the "measures" of the business, such as sales amount, units sold, or cost of goods sold. A fact table can often contain millions or even billions of rows.
The dimension tables surround the fact table, forming the points of the star. Each dimension table describes a specific business entity and contains the descriptive attributes that you will use to slice and dice the data in the fact table. Common dimension tables include a Time dimension, a Product dimension, a Customer dimension, and a Geography dimension. These tables are typically much smaller than the fact table.
The fact table is linked to the dimension tables using foreign key relationships. This simple, denormalized structure is easy for business users to understand and is highly optimized for the types of queries used in business intelligence. The 70-768 Exam assumes that you have a firm grasp of the star schema and the distinction between facts and dimensions.
The primary development environment for creating and managing Analysis Services projects is SQL Server Data Tools (SSDT). Proficiency in using SSDT is a core requirement for any candidate taking the 70-768 Exam. SSDT is an extension for Microsoft Visual Studio that provides a dedicated set of tools, templates, and designers specifically for building BI solutions, including SSAS models, Integration Services (SSIS) packages, and Reporting Services (SSRS) reports.
When you want to create a new data model, you would start by opening SSDT and creating a new Analysis Services project. The project templates will prompt you to choose whether you want to create a Multidimensional project or a Tabular project. This choice determines which set of designers and tools will be available to you within the project.
For Tabular projects, SSDT provides a grid-based designer that looks very similar to Excel, allowing you to view your tables, create relationships graphically, and write DAX formulas for calculated columns and measures. For Multidimensional projects, the environment provides a series of specialized designers for creating data source views, designing dimensions, and building the cube structure.
SSDT provides a complete, integrated development experience. You can design your model, process it with data from your sources, and browse and query the model, all from within the same tool. Once the model is complete, you can use SSDT to deploy it to an Analysis Services server. The 70-768 Exam will test your practical knowledge of navigating and using the SSDT environment for both model types.
The first step in building any data model is to connect to the underlying source data. An understanding of how to manage these connections is a key part of the 70-768 Exam curriculum. In an Analysis Services project, this is done by creating a "Data Source" object. A data source contains the connection string and credential information needed to connect to a source system, which is typically a relational data warehouse running on SQL Server.
Once the data source is defined, the next step depends on the type of model you are building. In a Multidimensional project, you will create a "Data Source View" (DSV). The DSV is a powerful and essential abstraction layer that sits between the physical data source and the cube. It allows the BI developer to select the specific tables and views from the data source that are relevant to the model.
Within the DSV, the developer can create a more user-friendly and logical representation of the data. They can rename tables and columns to be more business-friendly, create relationships between tables if they are not defined in the source, and even create named calculations that act like new columns on a table. The cube is then built on top of this logical DSV, not directly on the physical data source.
In a Tabular project, the concept of a DSV does not exist in the same way. The developer typically connects directly to the source tables and imports them into the Tabular model. However, the same principles of selecting, renaming, and creating relationships still apply; they are simply performed within the main model designer itself.
As you begin your journey to prepare for the 70-768 Exam, the most effective strategy is to start by mastering the foundational concepts. This is not an exam that can be passed by memorizing syntax. It is a test of your ability to make sound design decisions, and those decisions are all based on a core set of principles. Your initial focus should be on building a rock-solid understanding of these fundamentals.
The first and most important of these is the distinction between the Tabular and Multidimensional modeling modes. Before you write a single line of DAX or MDX, you must be able to clearly articulate the architectural differences between the two, their respective strengths and weaknesses, and the business scenarios in which you would choose one over the other. This is the most critical design decision you will make.
Next, you must become an expert in the principles of dimensional modeling. A deep and intuitive understanding of the star schema, the difference between fact and dimension tables, and the concept of granularity is non-negotiable. This knowledge is the common language that underlies both Tabular and Multidimensional models. Practice designing simple star schemas on paper for different business scenarios.
Finally, get comfortable with the development environment, SQL Server Data Tools (SSDT). Create both a sample Tabular project and a sample Multidimensional project. Explore the different designers and windows for each. By focusing on these three pillars first—the two modeling modes, dimensional modeling principles, and the SSDT environment—you will build the necessary foundation to successfully tackle the more advanced topics of the 70-768 Exam.
The Tabular model is a central focus of the 70-768 Exam, reflecting its importance in modern Business Intelligence. This model is based on an in-memory, columnar database engine that provides exceptional performance for analytical queries. A candidate for the exam must be proficient in the entire process of designing and developing a Tabular model from start to finish. This involves importing data, defining relationships, and enriching the model with business logic using the DAX language.
The development process for a Tabular model is designed to be rapid and intuitive, making it a favorite among developers and analysts. The entire model is built within a single, integrated designer in SQL Server Data Tools (SSDT). This environment provides a live connection to the data, allowing the developer to see the results of their changes in real-time.
A typical Tabular model development workflow begins with importing data from one or more source systems. The developer then creates relationships between the imported tables to form a coherent data model, usually a star schema. After the basic structure is in place, the model is enriched with calculated columns, measures, hierarchies, and Key Performance Indicators (KPIs) to meet the business's analytical requirements.
Finally, the developer will implement the security model, typically by defining roles and applying row-level security rules. The 70-768 Exam will test your practical knowledge of each of these steps, requiring you to understand not just the "how" but also the "why" behind each design decision in the process of building a powerful and user-friendly Tabular model.
The first step in building any Tabular model is to bring data into it. An understanding of the data import process is a key part of the 70-768 Exam curriculum. The primary method for this is to import data from a relational data source, such as a SQL Server data warehouse. When you import data, Analysis Services reads the data from the source, compresses it, and stores it in its in-memory columnar database engine, known as the VertiPaq engine.
The process is managed through a wizard in SQL Server Data Tools (SSDT). The developer creates a connection to the source database and can then select the specific tables and views they want to import into their model. The wizard allows the developer to preview the data and to filter or rename the columns before the import begins. This initial data shaping is an important step in creating a clean and efficient model.
Once the data is imported, each source table becomes a table in the Tabular model. The designer in SSDT provides a grid view of the data in each table, which looks and feels very similar to a worksheet in Excel. This familiar interface makes it easy to explore the imported data and to begin the process of enriching the model.
It is also possible to add data to the model by copying and pasting it from another source, such as an Excel file. While not as common for enterprise models, this capability is useful for quickly adding small lookup tables or supplementary data. The 70-768 Exam will expect you to be proficient in the standard data import process from a SQL Server source.
After the tables have been imported into the model, the next critical step is to define the relationships between them. This is how you transform a collection of separate tables into a single, cohesive data model. A deep understanding of how to create and manage relationships is a fundamental skill for the 70-768 Exam. Relationships are what allow users to slice and dice the measures from a fact table by the attributes from the dimension tables.
In a Tabular model, relationships are created graphically in the diagram view of the model designer in SSDT. A developer can simply drag a column from one table and drop it onto the corresponding key column in another table to create the relationship. For a star schema, this typically involves creating a one-to-many relationship from each dimension table to the central fact table.
In addition to relationships, a key feature for improving the user experience is the creation of hierarchies. A hierarchy is a logical, drill-down path that is defined on the attributes of a dimension. For example, in a Product dimension, you might create a hierarchy that goes from Product Category to Product Subcategory to the individual Product Name.
When a user in a reporting tool like Excel or Power BI sees this hierarchy, they can easily drill down from a high-level summary to more detailed information. This is much more intuitive than having to manually drag and drop the individual attribute columns. The 70-768 Exam requires you to know how to create both relationships and hierarchies to build a user-friendly and effective data model.
Once the basic structure of the model is in place, the next step is to enrich it with business logic. One of the primary ways to do this in a Tabular model is by creating calculated columns. A calculated column is a new column that you add to a table in your model, but its values are derived from a DAX formula rather than being imported from the data source. An understanding of calculated columns is a key DAX concept for the 70-768 Exam.
A calculated column is evaluated row by row for each row in the table, and its results are then stored in the model just like any other column. This process happens during the database processing, not at query time. Because the values are pre-calculated and stored, they can be used to filter or group data just like a regular column.
Calculated columns are useful for a variety of purposes. A common use case is to create a new attribute by combining or manipulating existing columns. For example, you could create a "Full Name" calculated column by concatenating the "First Name" and "Last Name" columns. Another common use is to perform a simple, row-level calculation, such as creating a "Total Price" column with the formula =[Quantity] * [Unit Price].
While powerful, it is important to remember that calculated columns consume memory, as their results are stored in the model. Therefore, they should be used judiciously. The 70-768 Exam will expect you to know the syntax for creating basic calculated columns and to understand the scenarios in which they are the appropriate tool to use.
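The two row-level examples mentioned above might be written as follows. This is an illustrative sketch: the table and column names (Customer, Sales, First Name, and so on) are assumptions, not taken from any specific sample database.

```dax
-- Calculated column on the Customer table: combines two text columns.
-- Evaluated once per row during processing and stored in the model.
Full Name = Customer[First Name] & " " & Customer[Last Name]

-- Calculated column on the Sales table: a simple row-level calculation
Total Price = Sales[Quantity] * Sales[Unit Price]
```

Because both columns are materialized at processing time, they can be used on slicers, rows, and filters just like imported columns, at the cost of additional memory.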
While calculated columns are evaluated row by row, "measures" are calculations that are evaluated at query time over an aggregation of many rows. Measures are the heart of any analytical model, as they represent the key business metrics that users want to analyze. A complete mastery of the concept of measures is the most critical part of the Tabular modeling section of the 70-768 Exam.
There are two types of measures: implicit and explicit. An implicit measure is one that is created automatically by a client tool like Excel or Power BI when a user drags a numeric column into the values area of a report. The client tool will automatically apply a simple aggregation like SUM or COUNT. While convenient, implicit measures are not a recommended best practice.
An explicit measure is one that is created by the data modeler directly in the model using a DAX formula. This is the preferred approach as it ensures that all business calculations are defined in one central location and are consistent across all reports. An explicit measure is defined in the calculation area of the model designer in SSDT. A simple explicit measure might look like Total Sales := SUM(Sales[SalesAmount]).
Explicit measures can be much more sophisticated than simple sums or averages. They can contain complex business logic that responds to the user's selections in the report. The ability to write and debug these DAX measures is the core skill of a Tabular model developer.
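A minimal sketch of explicit measures is shown below. The Sales table and OrderNumber column are assumed names; the point is that measures are defined once, centrally, and can build on each other.

```dax
-- Base measure: aggregates SalesAmount over the current filter context
Total Sales := SUM(Sales[SalesAmount])

-- Measures can reference other measures, keeping business logic in one place
Average Order Value := DIVIDE([Total Sales], DISTINCTCOUNT(Sales[OrderNumber]))
```

Using DIVIDE rather than the / operator avoids division-by-zero errors when a filter context contains no orders.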
A very common category of business analysis is time intelligence, which involves comparing performance over different time periods. The 70-768 Exam requires a developer to be proficient in implementing time intelligence calculations in a Tabular model using DAX. This allows users to easily analyze metrics like year-to-date sales, growth versus the previous year, or performance compared to the same period in the last year.
The foundation of all time intelligence in a Tabular model is a well-structured "Date" or "Calendar" dimension table. This is a special table that contains a continuous sequence of dates and various attributes for each date, such as the year, quarter, month, and day of the week. To enable the special DAX time intelligence functions, you must explicitly mark your date table as the official date table in the model properties.
Once the date table is in place, you can use the rich library of DAX time intelligence functions to create powerful measures. For example, the TOTALYTD function calculates the year-to-date total for a measure. The SAMEPERIODLASTYEAR function returns a set of dates from the previous year that corresponds to the current selection, which can be used within a CALCULATE function to compare performance.
Functions like DATEADD and PARALLELPERIOD provide even more flexibility for performing time-based comparisons. The ability to correctly set up a date table and use these DAX time intelligence functions is a critical skill for building any meaningful business analytics model and is a key topic on the 70-768 Exam.
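The two functions described above might be used as follows. This sketch assumes a base measure named [Total Sales] and a table named 'Date' (marked as the model's date table) with a Date column.

```dax
-- Year-to-date total, accumulated along the marked date table
Sales YTD := TOTALYTD([Total Sales], 'Date'[Date])

-- The same selection of dates, shifted back exactly one year
Sales LY := CALCULATE([Total Sales], SAMEPERIODLASTYEAR('Date'[Date]))
```

Both measures respond automatically to whatever year, quarter, or month the user has selected in the report.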
After the model has been built and enriched with calculations, the final step is to implement security to ensure that users can only see the data they are authorized to see. The 70-768 Exam requires you to understand how to implement security in a Tabular model. The primary mechanism for this is the creation of "Roles." A role is a named object in the model that contains a set of users or groups from Windows Active Directory.
Once a role is created, you can grant it permissions, such as "Read" access to the model. However, the most powerful security feature is "Row-Level Security" (RLS). RLS allows you to define a filter on a table that is specific to a particular role. This is done by writing a DAX formula in the "Row Filters" pane for that role and table.
For example, you could create a role called "US Sales Managers." On the "Sales Territory" table, you could apply a DAX row filter for this role with the formula =[Country] = "United States". When a user who is a member of this role connects to the model, they will only be able to see data for the United States. The filter is applied automatically and transparently for all queries.
This DAX-based row-level security is a powerful and flexible way to secure your data. You can create complex rules based on the user's identity, which can even be looked up from another table in the model. A solid understanding of how to create roles and apply these DAX row filters is an essential security skill for the 70-768 Exam.
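Two hedged examples of role row filters are sketched below: the static filter from the scenario above, and a dynamic variant. The 'Sales Territory' table is from the example above; the Manager Login column is a hypothetical mapping column added for illustration.

```dax
-- Static row filter on the 'Sales Territory' table
-- for the "US Sales Managers" role
='Sales Territory'[Country] = "United States"

-- Dynamic alternative: restrict rows to the connected user's identity,
-- matched against a (hypothetical) login column in the table
='Sales Territory'[Manager Login] = USERNAME()
```

The dynamic pattern lets a single role serve many users, with each user's visible rows determined at connection time.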
Data Analysis Expressions (DAX) is the formula and query language used in SSAS Tabular models, as well as in Power BI and Excel Power Pivot. For anyone taking the 70-768 Exam, achieving a high level of proficiency in DAX is not optional; it is the most critical skill for a Tabular model developer. DAX is what transforms a simple collection of tables into an intelligent and powerful analytical model.
DAX serves two primary purposes. First, it is used as a modeling language. As we have seen, DAX formulas are used to create the calculated columns and measures that contain the business logic of the model. This is where you define your Key Performance Indicators (KPIs) and other essential calculations.
Second, DAX can also be used as a query language. Client tools like Excel and Power BI automatically generate DAX queries in the background to retrieve data from the Tabular model based on the user's interactions with a report or a PivotTable. While you may not write many full queries by hand, understanding how DAX queries work is essential for optimizing your model and for debugging your measures.
The DAX language is designed to be simple to read, with a syntax that is similar to Excel formulas. However, under the surface, it is an incredibly powerful and nuanced language. A deep understanding of its core concepts, particularly the concept of "evaluation context," is what separates a novice from an expert DAX author.
To truly master DAX, a candidate for the 70-768 Exam must understand the concept of "evaluation context." This is the most fundamental and often the most challenging concept in the language, but it is the key to understanding how any DAX formula is calculated. The evaluation context is the "environment" in which a DAX formula is evaluated. There are two types of context: Row Context and Filter Context.
Row Context exists when a formula is being evaluated on a row-by-row basis. The most common example of this is a calculated column. When you create a calculated column, its DAX formula is executed independently for each row in the table. Within this context, you can refer to the values of other columns in that same row without any special functions. The row context provides the concept of "the current row."
Filter Context is the set of filters that are applied to the data model before a formula is evaluated. This context is created by the user's interactions with a report. For example, if a user selects the year "2025" from a slicer and the country "USA" from a chart, the filter context for any measure will be "all data where the year is 2025 and the country is USA." All measures are evaluated within a filter context.
A single DAX formula can be evaluated in an environment that has both a row context and a filter context. Understanding how these two contexts interact is the key to writing advanced DAX, and it is a concept that the 70-768 Exam will test implicitly through scenario-based questions.
If there is one function that every candidate for the 70-768 Exam must master, it is the CALCULATE function. CALCULATE is often described as the most important and powerful function in the DAX language. Its primary purpose is to modify the filter context in which an expression is evaluated. It is the key to creating sophisticated and dynamic measures.
The CALCULATE function takes at least two arguments. The first argument is an expression that you want to evaluate, such as SUM(Sales[SalesAmount]). The subsequent arguments are a series of filters that you want to apply. These filters can either be simple boolean expressions, or they can be more complex table functions. The CALCULATE function will apply these new filters to the existing filter context before it evaluates the expression.
For example, the formula CALCULATE(SUM(Sales[SalesAmount]), Product[Color] = "Red") will calculate the sum of sales, but it will do so after adding a new filter to the context for red products. This allows you to create measures that calculate a value for a specific segment of the data, regardless of the user's current selections in the report.
CALCULATE can also be used with special filter modifier functions, like ALL, to remove existing filters from the context. This is the key to calculating things like the percentage of a total. CALCULATE is the workhorse of the DAX language, and a deep understanding of its ability to manipulate the filter context is essential.
DAX provides a rich library of functions for performing calculations, and the 70-768 Exam will expect you to be familiar with the most common ones. These can be broadly divided into two categories: simple aggregators and iterators (also known as "X-functions"). A simple aggregator function, like SUM, AVERAGE, or COUNT, takes a single column as an argument and performs a calculation on the values in that column within the current filter context.
For example, SUM(Sales[SalesAmount]) will sum up the values in the SalesAmount column for all the rows that are visible in the current filter context. These simple aggregators are efficient and are used for the most common types of measures.
Iterator functions, which typically end in an "X" (like SUMX, AVERAGEX, and COUNTX), are much more powerful and flexible. An iterator function takes a table as its first argument and an expression as its second argument. The function then iterates through the specified table, one row at a time, and evaluates the expression in the row context of that current row. Finally, after it has iterated through all the rows, it aggregates the results.
For example, SUMX(Sales, Sales[Quantity] * Sales[Unit Price]) will go through the Sales table row by row, calculate the total price for each row, and then sum up those individual row-level results. Iterators are essential for any calculation that needs to be performed at a row level before the final aggregation.
To effectively use the CALCULATE function, you must also be familiar with the various DAX filter functions. These are functions that return tables and are used as filter arguments within CALCULATE to manipulate the filter context. An understanding of these functions is a key part of the advanced DAX knowledge required for the 70-768 Exam.
The most important filter modifier is the ALL function. The ALL function can be used to remove the filters from a column or an entire table. For example, the measure CALCULATE(SUM(Sales[SalesAmount]), ALL(Product)) will calculate the sum of sales for all products, ignoring any filters that the user may have placed on the Product dimension in their report. This is the key to calculating percentages of a grand total.
The FILTER function is another powerful tool. It is an iterator that takes a table as its first argument and a boolean condition as its second. It returns a new table that contains only the rows from the original table for which the condition is true. This can be used to create very specific and complex filter conditions within a CALCULATE expression.
Other important filter functions include ALLEXCEPT, which removes the filters from all columns in a table except for the ones you specify, and RELATEDTABLE, which is used to traverse relationships in the data model. Mastering these filter functions will allow you to perform almost any analytical calculation imaginable.
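Hedged sketches of FILTER and ALLEXCEPT as filter arguments to CALCULATE are shown below; the threshold of 10 and the Category column are illustrative assumptions.

```dax
-- FILTER returns a table of rows meeting a condition; here it is
-- used as a filter argument to CALCULATE
Large Order Sales :=
    CALCULATE(
        SUM(Sales[SalesAmount]),
        FILTER(Sales, Sales[Quantity] > 10)
    )

-- ALLEXCEPT removes every filter on Product EXCEPT Product[Category],
-- so the measure shows the category subtotal on every product row
Category Sales :=
    CALCULATE(SUM(Sales[SalesAmount]), ALLEXCEPT(Product, Product[Category]))
```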
As your DAX formulas become more complex, they can become difficult to read, debug, and maintain. To help with this, DAX includes the ability to use variables within a formula. The ability to use variables is a best practice that the 70-768 Exam will expect you to know. Variables are declared at the beginning of a formula using the VAR keyword, and the main expression is returned using the RETURN keyword.
Variables provide several key benefits. First, they dramatically improve the readability of your code. You can store the result of a complex, intermediate calculation in a well-named variable. This allows you to break down a complex formula into a series of logical, easy-to-understand steps. The main RETURN expression then becomes a simple combination of these intermediate variables.
Second, variables can improve the performance of your formula. If you need to use the result of a specific calculation multiple times within the same formula, you should store that result in a variable. The DAX engine will then calculate the value once and reuse it, which is much more efficient than having to recalculate the same complex expression multiple times.
The syntax is straightforward: VAR MyVariable = [Some Calculation] RETURN MyVariable * 2. You can declare multiple variables, one after another, before the final RETURN statement. Using variables is a hallmark of a professional DAX author and is an essential technique for writing clean and efficient code.
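A fuller sketch of the pattern, assuming hypothetical [Total Sales] and [Total Cost] measures, shows how variables break a formula into readable steps:

```dax
-- Each VAR is evaluated once and reused; RETURN combines the results.
-- [Total Sales] and [Total Cost] are assumed base measures.
Profit Margin % :=
VAR SalesValue = [Total Sales]
VAR CostValue  = [Total Cost]
VAR Profit     = SalesValue - CostValue
RETURN
    DIVIDE ( Profit, SalesValue )
```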
The 70-768 Exam will not just test your knowledge of individual DAX functions; it will test your ability to combine them to solve common business problems. There are several common analytical patterns that a data modeler is frequently asked to implement. One of the most common is calculating a "percentage of total."
To calculate a percentage of total, you need two values: the value for the current selection (the numerator) and the value for the grand total (the denominator). The numerator is typically a simple measure, like Total Sales = SUM(Sales[SalesAmount]). The denominator is a modified version of this measure where you remove the relevant filters. For example, to get the percentage of sales for a product relative to all products, the denominator would be CALCULATE([Total Sales], ALL(Product)). The final measure is then DIVIDE([Total Sales], [All Product Sales]).
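Put together, the three measures from this pattern might look like the following (using the Sales and Product names from the example above):

```dax
-- Numerator: the value for the current filter context.
Total Sales := SUM ( Sales[SalesAmount] )

-- Denominator: the same value with all Product filters removed.
All Product Sales := CALCULATE ( [Total Sales], ALL ( Product ) )

-- DIVIDE handles the divide-by-zero case gracefully.
Pct of All Products := DIVIDE ( [Total Sales], [All Product Sales] )
```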
Another common pattern is performing year-over-year (YoY) growth calculations. This typically involves using the SAMEPERIODLASTYEAR function to get the sales for the previous year and then using a formula like ([Current Year Sales] - [Last Year Sales]) / [Last Year Sales] to calculate the growth percentage.
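A sketch of this YoY pattern, assuming a date table named 'Date' marked as a date table and a [Total Sales] base measure:

```dax
-- Shift the current filter context back one year.
Last Year Sales :=
CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

-- Growth relative to the prior year; DIVIDE avoids division-by-zero errors.
YoY Growth % :=
DIVIDE ( [Total Sales] - [Last Year Sales], [Last Year Sales] )
```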
Other common patterns include calculating moving averages, ranking items based on a measure, and performing semi-additive calculations (like inventory balances that should not be summed up over time). Being familiar with the standard DAX patterns for solving these common business scenarios is a key to success on the 70-768 Exam.
While the Tabular model represented the modern approach to BI in the SQL Server 2016 era, the Multidimensional model, or "cube," is the classic, powerful OLAP solution. The 70-768 Exam requires a candidate to be proficient in the design and development of these traditional cubes. A Multidimensional model is a highly structured and optimized database designed specifically for complex analytical queries and large-scale enterprise reporting.
The development process for a cube is more structured and less interactive than for a Tabular model. It is done within a dedicated Multidimensional project in SQL Server Data Tools (SSDT). The process involves a sequence of distinct steps, each using a specialized designer to build a different component of the cube. The core components of a cube are its dimensions and its measures.
The workflow begins with creating a Data Source View (DSV), which is a logical representation of the underlying data warehouse tables. The developer then uses this DSV to design the "dimensions," which contain the descriptive attributes used for analysis. After the dimensions are built, the developer creates the "cube" itself, selecting the fact tables that contain the numerical "measures" and linking them to the previously created dimensions.
Finally, the cube can be enriched with calculations, Key Performance Indicators (KPIs), and other advanced features. While more complex to build than a Tabular model, a well-designed cube can provide exceptional performance and analytical power. The 70-768 Exam will test your knowledge of this entire, structured development process.
Dimensions are the heart of a Multidimensional model. They are the "by which" you analyze your data. For example, you analyze sales "by" time, "by" product, and "by" geography. The design of these dimensions is a critical skill for the 70-768 Exam. A dimension is built from a table in the Data Source View and is composed of one or more "attributes." An attribute is a column from the dimension table that can be used for slicing, dicing, and filtering the data.
Each dimension has a "key attribute," which is the attribute that uniquely identifies each member of the dimension (e.g., the ProductKey). Other attributes provide the descriptive information (e.g., ProductName, ProductColor, ProductCategory). A key part of dimension design is defining the "attribute relationships."
Attribute relationships define the one-to-many relationships between the attributes within a single dimension. For example, within a Product dimension, there is a one-to-many relationship between ProductCategory and ProductSubcategory. By defining these relationships correctly, you provide the Analysis Services engine with critical information that it can use to build more efficient aggregations and improve query performance.
If the attribute relationships are not defined correctly, the engine may perform unnecessary calculations, leading to slow query performance. The 70-768 Exam will expect you to understand the importance of attribute relationships and how to configure them in the dimension designer to create a well-performing and logically sound dimension.
To make the dimensions more user-friendly and to enable intuitive drill-down analysis, a developer will create hierarchies. An understanding of how to design and build hierarchies is a key part of the Multidimensional modeling knowledge required for the 70-768 Exam. A hierarchy is a logical, ordered structure of attributes that defines a navigation path for the user.
For example, in a Time dimension, you could create a "Calendar" hierarchy that has levels for Year, Quarter, Month, and Date. In a reporting tool, a user would see this single "Calendar" hierarchy and could easily drill down from a summary of sales by year to see the breakdown by quarter, and so on. This is much more intuitive than having to manually drag and drop the four separate attributes.
There are two main types of hierarchies. A "natural" hierarchy is one where there is a clear one-to-many relationship between each level of the hierarchy (e.g., each Year has multiple Quarters, each Quarter has multiple Months). This is the most common and efficient type of hierarchy. An "unnatural" (or reporting) hierarchy is one where there is no strict one-to-many relationship between the levels (e.g., a Gender-then-MaritalStatus hierarchy, where each marital status appears under both genders); it is a navigation convenience rather than a reflection of the data's structure.
Hierarchies are created in the dimension designer in SSDT by simply dragging the attributes into the hierarchies pane and arranging them in the correct order. Creating well-designed hierarchies is one of the most important things a developer can do to improve the usability of their cube.
While dimensions provide the context for analysis, the "measures" provide the quantitative data that is being analyzed. The design of measures and their parent containers, "measure groups," is a core part of the cube development process and a key topic for the 70-768 Exam. A measure is a numeric column from a fact table that can be aggregated, such as SalesAmount or OrderQuantity.
A "measure group" is a collection of measures that all come from the same fact table. When you design your cube, you will create a separate measure group for each fact table in your Data Source View. For example, if you have a Sales fact table and an Inventory fact table, you would create a Sales measure group and an Inventory measure group in your cube.
For each measure you create, you must define its aggregation type. The most common aggregation type is Sum, but Analysis Services supports many others, such as Count, Min, Max, and Average. For more complex scenarios, it also supports semi-additive measures (like inventory balances that should be summed across products but not across time) and distinct count measures.
The cube designer in SSDT provides a simple interface for creating measure groups and measures. The developer selects the fact table and then chooses which of its numeric columns they want to create as measures. The ability to correctly define these measures and their aggregation properties is a fundamental skill for a cube designer.
Just as DAX is the language of the Tabular model, MDX (Multidimensional Expressions) is the query language for the Multidimensional model. A foundational understanding of MDX syntax and concepts is a requirement for the 70-768 Exam. MDX is a powerful and expressive language that is specifically designed for querying the hierarchical, multidimensional data structures found in a cube.
While end-users in tools like Excel will not typically write MDX by hand (the tool generates it for them), the BI developer must understand MDX to create calculations, define KPIs, and implement security within the cube. A basic MDX query has a structure that is different from SQL. It uses a SELECT statement, but instead of selecting columns from a table, it selects members from dimensions and places them on axes, such as ON COLUMNS and ON ROWS.
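A minimal example of this structure, written against a hypothetical Adventure Works cube with assumed measure and dimension names, might look like this:

```mdx
-- Measures on the columns axis, calendar years on the rows axis.
-- The WHERE clause (the "slicer") restricts results to the Bikes category.
SELECT
    { [Measures].[Sales Amount], [Measures].[Order Quantity] } ON COLUMNS,
    [Date].[Calendar Year].Members ON ROWS
FROM [Adventure Works]
WHERE ( [Product].[Category].&[Bikes] )
```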
The core concepts in MDX are "tuples" and "sets." A tuple is a coordinate that specifies a single point within the cube, defined by a combination of one member from one or more dimensions (e.g., (2025, USA, Bikes)). A set is an ordered collection of one or more tuples. An MDX query is essentially a definition of the sets that you want to display on the different axes of your result set.
While the 70-768 Exam is not a deep-dive MDX exam, it will expect you to have a basic understanding of the syntax and the core concepts of tuples and sets. This knowledge is the prerequisite for understanding how to create calculations and implement security in a Multidimensional model.
A raw cube with just the base measures from the fact table is useful, but its true analytical power is unlocked when the developer adds custom calculations and Key Performance Indicators (KPIs). The implementation of these enhancements is a key topic for the 70-768 Exam. These calculations are typically defined using the MDX language.
A "calculated member" is a new member of a dimension that is defined by an MDX expression. For example, you could create a calculated member in the Measures dimension called "Profit Margin" that is defined by the formula ([Measures].[Sales] - [Measures].[Cost]) / [Measures].[Sales]. This calculation is performed on the fly when the user queries the cube, and "Profit Margin" will appear to the user just like any other measure.
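In the cube's MDX script, that calculated member could be sketched as follows (the [Sales] and [Cost] base measures are assumed names):

```mdx
-- Define a calculated member in the Measures dimension.
-- IIF guards against division by zero when Sales is empty.
CREATE MEMBER CURRENTCUBE.[Measures].[Profit Margin] AS
    IIF ( [Measures].[Sales] = 0, NULL,
          ( [Measures].[Sales] - [Measures].[Cost] ) / [Measures].[Sales] ),
    FORMAT_STRING = "Percent",
    VISIBLE = 1;
```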
Key Performance Indicators (KPIs) are a more specialized type of calculation that is used to track performance against a business goal. A KPI is a collection of four MDX expressions. The "Value" expression defines the actual value of the KPI (e.g., Total Sales). The "Goal" expression defines the target value. The "Status" expression compares the Value to the Goal and returns a normalized value (typically -1, 0, or 1). The "Trend" expression defines the trend of the KPI over time.
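The four expressions for a hypothetical "Sales KPI" (assuming [Total Sales] and [Sales Quota] measures and a Calendar hierarchy on the Date dimension) might be sketched as:

```mdx
-- Value expression: the actual value being tracked.
[Measures].[Total Sales]

-- Goal expression: the target value.
[Measures].[Sales Quota]

-- Status expression: normalized to -1 (bad), 0 (warning), or 1 (good).
CASE
    WHEN KpiValue("Sales KPI") >= KpiGoal("Sales KPI") THEN 1
    WHEN KpiValue("Sales KPI") >= KpiGoal("Sales KPI") * 0.9 THEN 0
    ELSE -1
END

-- Trend expression: compare the value to the same period last year.
CASE
    WHEN KpiValue("Sales KPI") >
         ( KpiValue("Sales KPI"),
           ParallelPeriod([Date].[Calendar].[Year], 1) ) THEN 1
    ELSE -1
END
```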
Reporting tools can then use this information to display a graphical indicator, such as a traffic light or a gauge, that visually represents the status of the KPI. The ability to use MDX to create these calculations is a core skill for an advanced cube designer.
Securing a Multidimensional model to ensure that users can only see the data they are authorized to see is a critical administrative task and a key security topic for the 70-768 Exam. The security model for a cube is role-based, similar to the Tabular model. An administrator creates "Roles" and adds Windows users and groups to them. The permissions are then granted to these roles.
The most powerful security feature in a Multidimensional model is "dimension data security." This allows an administrator to restrict a role's access to specific members of a dimension. For example, you could create a role for the "US Sales Manager" and then, on the Geography dimension, specify that this role is only allowed to see the "United States" member and its descendants (the individual states and cities).
This security is defined in the "Dimension Data" tab of the role designer in SSDT. The administrator can either select the allowed or denied members from a simple checklist, or for more complex requirements, they can write an MDX expression that defines the set of members that the role is allowed to see.
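For the US Sales Manager scenario above, the allowed-member set could be sketched as follows. The Geography hierarchy name is assumed, and the dynamic variant assumes a hypothetical bridge measure group mapping user logins to countries:

```mdx
-- Static allowed set: only the United States member (descendants are
-- included when configured in the Dimension Data designer).
{ [Geography].[Country].&[United States] }

-- Dynamic allowed set: members the connected user is mapped to,
-- resolved at query time via UserName().
FILTER (
    [Geography].[Country].Members,
    ( StrToMember ( "[User].[Login].&[" + UserName() + "]" ),
      [Measures].[Bridge Count] ) > 0
)
```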
When a user in this role queries the cube, all the measure values will be automatically filtered to show only the data that corresponds to the dimension members they are allowed to see. This provides a powerful and granular way to implement row-level security in a Multidimensional model.