Unraveling the Microsoft Analysis Services Deployment Wizard

Admit it: either you love it or you hate it. I must confess I was part of the latter group, but the wizard has since convinced me to love it despite some caveats. So what made me switch sides? First let's look into the workings of the wizard and how we can use it in a Continuous Integration environment.

The Wizard

We all know the wizard as it is installed together with SQL Server, and most of us have probably started it to see what its capabilities are. But by default a wizard interface is not something we can use directly in a CI environment. Luckily the wizard can be started from a command prompt with some parameters to suppress the GUI. See here for the documentation. Basically the wizard accepts three different parameters:

  • The /a switch to run the wizard in ‘answer mode’. This will write all provided deployment settings to the model files.
  • The /o switch to create an XMLA output file with all the provided deployment settings.
  • The /s switch to run the wizard in silent mode. This will deploy the model with the stored settings of the files.
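As a sketch, the documented command-line usage looks roughly like this (file names are illustrative; check the linked documentation for the exact switch syntax of your version):

```shell
# Answer mode: write the provided deployment settings back to the input files
Microsoft.AnalysisServices.Deployment.exe Model.asdatabase /a

# Output mode: create an XMLA deployment script instead of deploying
Microsoft.AnalysisServices.Deployment.exe Model.asdatabase /o:deploy.xmla

# Silent mode: deploy using the settings stored in the files, logging to a file
Microsoft.AnalysisServices.Deployment.exe Model.asdatabase /s:deploy.log
```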

But despite all the settings and documentation, it is not clear what all the capabilities of the wizard are. So most people use the option to create an XMLA file, and during the deployment to the server(s) the data connection settings are replaced with e.g. PowerShell. I have deployed a lot of cubes/models this way.


But in my current assignment at a bank, it is necessary to deploy and run everything via the principle of least privilege. In my case I have to use a dedicated user for the database connection and deploy via a different dedicated user in a CI process. So I started looking into the real capabilities of the deployment wizard.

The first thing I noticed is that a build in Visual Studio creates three files for a Tabular model: Model.asdatabase, Model.deploymentoptions and Model.deploymenttargets. The first file contains the structure of the database, the second one the different deployment options, like whether the database will be processed after deployment and what to do with existing roles (merge or overwrite). The third file contains a connection string to the deployment server and the name of the database and model/cube.

If you build a standard multidimensional solution, you will get four files: the three mentioned above and a ‘.configsettings’ file. In this file we can define a different connection string for the data connection in the database. So this is our friend, because no matter which server/database is used during development, it is possible to overwrite it with a different one stored in the ‘.configsettings’ file. And because a Tabular model is based on the same model (BISM) as a Multidimensional cube, we can use the ‘.configsettings’ file in both scenarios.

But the wizard is capable of consuming two more files: a ‘.assecurityinformation’ file and a ‘.asassemblylocations’ file. Both files are optional and are only used if they are available with the same name as the ‘.asdatabase’ file. The first contains encrypted security settings regarding database connections, and the second one contains assembly information on where to store the embedded assemblies. I must confess that this is the part I didn't investigate, as I have never used custom assemblies as part of a database in my life.

So if we provide the wizard a ‘.asdatabase’ file name as input, the wizard loads the following files if available: ‘.deploymentoptions’, ‘.deploymenttargets’, ‘.configsettings’, ‘.assecurityinformation’ and ‘.asassemblylocations’. Together these files can provide all the information needed for a fully automatic deployment scenario.


As shown above, the wizard is capable of deploying a solution fully automatically when provided with the correct information in a set of files for a given environment. But ideally I like deploying my solutions via the principle of build once, deploy many. So I will build my solution once and deploy the build output to the different DTAP environments using different parameters.

If we look at deploying SQL Server databases for instance: create one ‘.dacpac’ file and deploy it via ‘sqlpackage.exe’ with a different profile file for each environment. But with the deployment wizard, although I can store different environment settings in a ‘.configsettings’ file, I cannot provide a specific one as a parameter at runtime.
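For comparison, a sketch of the build once, deploy many flow with sqlpackage.exe (file and profile names are illustrative):

```shell
# One build artifact, a different publish profile per DTAP environment
sqlpackage.exe /Action:Publish /SourceFile:Database.dacpac /Profile:Test.publish.xml
sqlpackage.exe /Action:Publish /SourceFile:Database.dacpac /Profile:Production.publish.xml
```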

This gave me the idea to wrap the wizard and accept a ‘.configsettings’ file as a parameter. Via ILSpy it is possible to decompile the wizard code, and luckily the code is not obfuscated; only the methods of the wizard itself are not exposed. So we can take a look at how the wizard works and wrap the official methods with a custom application.


With this idea I created an application capable of delivering what I needed: AnalysisServices.Deployment. As an extra option I added the capability to create a ‘.assecurityinformation’ file to store a username and password for a model datasource connection. This file is encrypted using information from the current user and current machine, so it is not possible to easily reuse this information file. I have another idea to create a custom security file which encrypts this information using a master key, so it can be exchanged easily.

The solution uses the same files and same logic as the official wizard. The syntax is at this moment:

/d             Deployment mode
/m:[filename]  Model definition
/t:[filename]  Deployment target
/o:[filename]  Deployment options
/c:[filename]  Deployment config
/s:[filename]  Deployment security
/a:[filename]  Deployment assembly

/i             Impersonation mode
/ds:[ID]       Datasource ID
/u:[username]  Impersonation username
/p:[password]  Impersonation password
/f:[filename]  Export filename

/?, /h         This help

There are two modes: deployment and impersonation. The /d switch triggers the deployment mode, and the /m switch is required. If no other switches are provided, the application will look for the remaining files sharing the same base part. If a file is provided with the correct switch, the application will use that file. This provides the option to deploy the same file to a different server with a different datasource connection by simply providing different parameters. At this moment the ‘.asassemblylocations’ file is not yet supported.

In impersonation mode the /i switch will create a file using the provided ID together with the username and password. This file is at this moment only capable of storing a username and password for one datasource connection.
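Putting the two modes together, a deployment pipeline could call the application roughly like this, based on the syntax listing above (server, file and datasource names are illustrative):

```shell
# Impersonation mode: create a '.assecurityinformation' file for one datasource
AnalysisServices.Deployment.exe /i /ds:SalesDW /u:svc_model_read /p:******** /f:Model.assecurityinformation

# Deployment mode: same build output, environment-specific target, config and security files
AnalysisServices.Deployment.exe /d /m:Model.asdatabase /t:Prod.deploymenttargets /c:Prod.configsettings /s:Model.assecurityinformation
```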

Future enhancements

At this moment I have only tested it with SQL Server 2014; I will test it with SQL Server 2012 and change the logic if needed. Other enhancements, in random order:

  • implementing the ‘.asassemblylocations’ file
  • adding multiple datasource connection username/password combinations
  • adding impersonation logic for the deploymentserver connection
  • adding custom encryption with a master key for easy exchange


Posted in Out the field | Tagged ,

Pushing data thru Azure Stream Analytics to Power BI – part 3

Note: see http://azurebi.jppp.org/2015/09/29/pushing-data-thru-azure-stream-analytics-to-power-bi-part-4/ for some new insights.

After the first two parts (1, 2) it is now time to look at the code to push data to the event hub, process the data from the event hub and push it to the Power BI API. My initial idea was to push data from the Microsoft Band sensors to Power BI, so for that purpose I use a HeartBeat class for the records being pushed.

Basically we need to code the following steps:

Sending data to event hub

Before we can send data to an Azure event hub we first need the connection information. In the Event Hub there is an option to retrieve the connection information at the bottom: Connection Information. This will pop up a screen with the connection string for the different defined policies.


To access the event hub in C# we need to create an EventHubClient with the connection string and event hub name as parameters. After initializing the object we can call the SendAsync method to send an event to the event hub. In this case we are creating a random heartbeat and serializing an initialized HeartBeat class to JSON, as that is the in- and output of the configured Stream Analytics job. To start sending events we can define a Task and call that task via the Wait() method.

static async Task SendingRandomMessages()
{
  var eventHubClient = EventHubClient.CreateFromConnectionString(eventHubConnectionString, eventHubName);
  while (true)
  {
    try
    {
      List<HeartBeat> heartbeats = new List<HeartBeat>
      {
        new HeartBeat { EventDate = DateTime.Now, HeartBeatValue = new Random().Next(80, 150) }
      };
      string message = heartbeats.ToJson(JavaScriptConverter<HeartBeat>.GetSerializer());

      await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));

      Console.WriteLine("{0} > Sending message: {1}", DateTime.Now.ToString(), message);
    }
    catch (Exception exception)
    {
      Console.ForegroundColor = ConsoleColor.Red;
      Console.WriteLine("{0} > Exception: {1}", DateTime.Now.ToString(), exception.Message);
      Console.ResetColor();
    }
    await Task.Delay(200);
  }
}

Reading data from event hub

First we need some basic configuration settings. Besides the event hub policies, retrieved the same way as for the sending event hub, we need the corresponding storage account name and access key to create a connection string.

With that information we can register an EventProcessorHost and start the receiver by calling the RegisterEventProcessorAsync method with an IEventProcessor class as type.


To read the events from the event hub you need to implement the IEventProcessor.ProcessEventsAsync method in the IEventProcessor class to process the received messages.

async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
  foreach (EventData eventData in messages)
  {
    // Process received message
  }

  if (this.checkpointStopWatch.Elapsed > TimeSpan.FromSeconds(10))
  {
    await context.CheckpointAsync();
  }
}

Pushing data to Power BI API

I started with downloading the Power BI API sample for .NET from here. This gives a basic idea of communicating with the API. If you have not played with the API yet, please take a look at the sample and the way the API works, especially at how the authorization and authentication work (https://msdn.microsoft.com/library/dn877544).

The biggest problem with the sample is that generic code and application-specific code are mixed. For the sample that is not a problem, but to reuse the code in my application I had to rewrite a big part of it. I converted the class with static methods to a generic class with methods free of any application-specific logic, like table structures. All that information has to be set when the class is initialized. I also changed the logic around how the AccessToken is used: in my code it is possible to retrieve the token after the class is initialized and to set it when the class is reinitialized. This last part is important, because if we want to use it in combination with an Event Hub, multiple instances are created due to the partition count, and eliminating the need to reauthorize the application over and over again helps a lot.

Additional changes to reading events

After changing the code to be usable in combination with an Event Hub, the event-reading code needs to be changed so the PowerBIHelper class can be initialized with the correct AccessToken, eliminating an extra authorization. The problem with the previous way of initializing the event hub processing class is that it is instantiated under the hood when it is registered.

If we want to instantiate the helper class ourselves, we need to use an IEventProcessorFactory, which instantiates the IEventProcessor classes that process the events. After initializing the factory with the preset parameters needed for the Power BI helper class, we can register the factory with an EventProcessorHost instance. When the factory is registered, CreateEventProcessor is called once per partition and returns a custom-initialized IEventProcessor class.

host = new EventProcessorHost(eventProcessorHostName, EventHubName, EventHubConsumerGroup.DefaultGroupName, EventHubConnectionString, storageConnectionString);
PowerBIEventProcessingFactory factory = new PowerBIEventProcessingFactory(ClientID, DataSetName, TableName, JsonTableSchema, Token, DataSetID);
await host.RegisterEventProcessorFactoryAsync(factory);

IEventProcessor IEventProcessorFactory.CreateEventProcessor(PartitionContext context)
{
  return new PowerBIEventProcessing(clientID, dataSetName, tableName, jsonTableSchema, token, dataSetID);
}

After registering the event processing part, the actual processing of the events writes the collected JSON to Power BI. The JSON returned by the event hub is the original HeartBeat class information (because Stream Analytics doesn't alter the column structure) wrapped in an event hub envelope. We need to deserialize that JSON to a custom object to get the included HeartBeat classes.

public class EventMessage
{
  public List<HeartBeat> rows { get; set; }
  public DateTime EventProcessedUtcTime { get; set; }
  public int PartitionId { get; set; }
  public DateTime EventEnqueuedUtcTime { get; set; }
}

List<EventMessage> eventMessages = JsonConvert.DeserializeObject<List<EventMessage>>(Encoding.UTF8.GetString(eventData.GetBytes()));

In the snippet above, eventData is one of the messages returned by the event hub. After deserializing it to a known object we can read all the rows and store them in Power BI via the API.

See the attached code for more details: the code.

Next Steps

  1. Introduction: https://azurebi.wordpress.com/2015/03/30/pushing-data-thru-azure-stream-analytics-to-power-bi-part-1/
  2. Configuring Azure services: https://azurebi.wordpress.com/2015/04/02/pushing-data-thru-azure-stream-analytics-to-power-bi-part-2/
  3. Sending, reading and processing the events: https://azurebi.wordpress.com/2015/04/14/pushing-data-thru-azure-stream-analytics-to-power-bi-part-3


Posted in PoC | Tagged , | 3 Comments

Pushing data thru Stream Analytics to Power BI – part 2

Note: see http://azurebi.jppp.org/2015/09/29/pushing-data-thru-azure-stream-analytics-to-power-bi-part-4/ for some new insights.

In part 1 I provided an overview of the given solution. Now it is time to configure Azure Service Bus, Event Hubs and Stream Analytics to be able to send and receive messages. I am still using the ‘old’ Azure Management Portal, as I cannot get used to the new portal: I always get lost in that one.

The steps that need to be done:

Configure Azure Service Bus

Create a new namespace

To use the Azure Service Bus and its Event Hubs in combination with Stream Analytics we first need to create a new namespace by clicking Create a new namespace on the Service Bus page in the Azure Portal. One important thing is that it needs to be the Standard messaging tier. This is because we need to define a different Consumer Group, as Stream Analytics cannot connect to the $Default group.


Create a sender event hub

After we have created the namespace, the Event Hubs can be created. This can easily be done using the standard way of creating objects in the Azure Management Portal.


By default an event hub has 16 partitions and a retention of 1 day. There is no need to change this in our case, but in a real-life scenario these values can be altered to satisfy the business need.

Define access policy

To access the event hub externally, access policies need to be created. The direction of the policies is based on the program/app that is using the event hub. By clicking the arrow left of the event hub name we can configure the event hub. In the Configure tab we need to add a policy and set the permission to Send.


Create a custom consumer group

To use the sender event hub (PowerBIEventHub) with Stream Analytics we need to create a new Consumer Group. This can be done via the Consumer Groups tab next to the Configure tab.


Create a receiver event hub

The receiver event hub (PowerBIReceiver) can be created the same way as the sender event hub. Only this one doesn't need a custom consumer group, and the permission of the access policy needs to be set to Listen.


Configure Azure Stream Analytics

Now that the event hubs are set up, Stream Analytics can be created and configured. This is pretty simple. Stream Analytics needs a storage account, but this can be created while creating the Stream Analytics job.


Stream Analytics is basically a pipeline with an input and an output, and in between a query defines which records of the input flow to the output. In our case we are linking the event hubs as in- and output of this stream via the Input and Output tabs.


Connect the sender event hub as input

In the Input tab we can easily add an input and a wizard pops up. In the first step choose Data Stream


In step 2 choose Event Hub as we are connecting our event hub to Stream Analytics


In step 3 we need to provide all the input event hub information. In this step we need the consumer group we created before; the wizard will not accept the $Default group, as special characters are not accepted. The Input Alias is needed when defining the query; it is the ‘table’ the stream query selects information from.


And in the last step (4) choose JSON and UTF8, as that is the format in which we send the information to the sender event hub.


Connect the receiver event hub as output

Adding an output is almost the same as adding an input: after clicking the add output option a wizard pops up. In the first step we select Event Hub, as this is the only option we can easily use to process the data directly. (Note: in the UserVoice suggestions there is already a request to add an API output, with status ‘Under Review’…)


And in the final step, number 2, we provide the receiver event hub information. There is no need to configure the advanced settings in this case.


Define the event processing query

Now that we have an input and an output, we need to define a query to retrieve the input information and add it to the output. This is done in the Query tab. In my case I use a very simple query that just puts all the input unfiltered to the output. The query has a SQL-like syntax and uses two ‘tables’, namely the input and output aliases provided in the different wizards.
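A pass-through query of that shape could look like the following, assuming the input and output aliases were simply named after the event hubs:

```sql
-- Select every input event unfiltered into the output
SELECT *
INTO [PowerBIReceiver]
FROM [PowerBIEventHub]
```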


After we have created the query we can test it by uploading a JSON file and looking at the output. If everything is set correctly we can start the stream by clicking the play button, and the Azure services are set up, working correctly and ready to be used by the sending and processing apps.

Next Steps

  1. Introduction: https://azurebi.wordpress.com/2015/03/30/pushing-data-thru-azure-stream-analytics-to-power-bi-part-1/
  2. Configuring Azure services: https://azurebi.wordpress.com/2015/04/02/pushing-data-thru-azure-stream-analytics-to-power-bi-part-2/
  3. Sending, reading and processing the events: https://azurebi.wordpress.com/2015/04/14/pushing-data-thru-azure-stream-analytics-to-power-bi-part-3


Posted in PoC | Tagged , | 3 Comments

Pushing data thru Azure Stream Analytics to Power BI – part 1

Note: see http://azurebi.jppp.org/2015/09/29/pushing-data-thru-azure-stream-analytics-to-power-bi-part-4/ for some new insights.

Since the beginning of this year a new version of Power BI has been released in preview, and since mid-March this version is available worldwide. One new feature of this preview is that there is now an API to push data programmatically to Power BI. While pushing this data, the dashboard based on the dataset is updated in real time, giving a ton of opportunities.

Azure Stream Analytics

One of the advantages of putting Stream Analytics between the sender and the Power BI API is that the effective throughput of the rate-limited API can be increased. Another advantage is that Stream Analytics can be used to filter the events, and we can use multiple sender apps that send events to the event hub to be processed by a single processor app; both will reduce the number of API calls.

With the current version of Stream Analytics I have to use a processor app to consume the output of the stream and call the Power BI API, because the only usable output is via an event hub. Maybe in a future release there will be an option to direct output to an API. In my case I will use the processor app to do some extra aggregation to reduce the number of calls to the Power BI API.

App Concept

During the last SQLBits, Kasper de Jonge (PM of Power BI) showed the possibility of using Azure Stream Analytics. The developer site of Power BI also mentions the possibility, but as ‘coming soon’. I couldn't wait, tried it myself and got it working. In a series of blog posts I will provide a guide to achieving this yourself.


The idea is quite simple. There are two apps, a sender (Power BI App) and a processor (Power BI Process App), and a Stream Analytics pipeline connected to Azure Event Hubs as input and output. The last step is that the processor app feeds the Power BI dashboard via the API.

What do we need?

First we need an active Azure subscription and to be part of the Stream Analytics preview; you can apply here. We also need an active Power BI subscription which is linked to the Azure Active Directory of your subscription. See this link for more information.

Next Steps

  1. Introduction: https://azurebi.wordpress.com/2015/03/30/pushing-data-thru-azure-stream-analytics-to-power-bi-part-1/
  2. Configuring Azure services: https://azurebi.wordpress.com/2015/04/02/pushing-data-thru-azure-stream-analytics-to-power-bi-part-2/
  3. Sending, reading and processing the events: https://azurebi.wordpress.com/2015/04/14/pushing-data-thru-azure-stream-analytics-to-power-bi-part-3


Posted in PoC | Tagged , | 2 Comments

Multiple hierarchies and xTDs

Last week I was challenged by a co-worker: ‘create a way that a tabular cube can serve a YTD value for different time hierarchies.’ Because I work at a retailer at this moment, there is a fiscal calendar and a 4-4-5 calendar, which of course don't align in their begin and end dates.

While I was writing this post, Marco Russo wrote a great post on how to create different xTD values. Please read it: http://www.sqlbi.com/articles/week-based-time-intelligence-in-dax/ (Note to myself: do not postpone the writing of your posts)

This has one advantage for me: I don't have to explain the logic behind xTDs that are not based on a standard calendar. Lucky me :-).

The only challenge left is the one measure with different hierarchies, but that can easily be solved by using two Date tables in PowerPivot: one table containing the standard calendar and one containing the year-week data. Both get a hierarchy and are linked to the same fact table using the same date key.

Diagram view

And in the fact table two values are defined, YTDMonth and YTDWeek, each using a different date table. Combined they form a measure with an IF() statement showing one of the two YTD values.

YTD:=IF(HASONEVALUE('Calendar Week'[Year]), [YTDWeek],[YTDMonth])
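The two underlying measures could be sketched roughly like this, assuming a fact table Fact with a SalesAmount column and a Date column in each date table (for proper week-based YTD over a 4-4-5 calendar, see the patterns in Marco Russo's article linked above):

```dax
YTDMonth:=TOTALYTD(SUM(Fact[SalesAmount]), 'Calendar'[Date])
YTDWeek:=TOTALYTD(SUM(Fact[SalesAmount]), 'Calendar Week'[Date])
```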

Basically it is the same situation as with Analysis Services 2000: one hierarchy per dimension. And as long as DAX is not aware of the dimension that is selected, or of scoping, this design paradigm should be used to solve this problem. Or stick with a value per xTD.


Test file: Multiple hierarchies with one YTD measure

Posted in PoC | Tagged , ,

Adding #Trendlines to #PowerPivot for #PowerView

It started during a presentation by Cathy Dumas at the latest SQLBits, where she compared the different charting capabilities of Excel, Power View, PerformancePoint and Reporting Services. One of those capabilities was trendlines, which are missing in V1 of Power View. She suggested adding a trendline as a measure, but didn't succeed, so she raised a challenge. And here I am, taking up the challenge to add a trendline to a PowerPivot model to be used in Power View.

Trendline theory

The theory behind a linear trendline is minimizing the least squares of the given set. The Wikipedia page gives a lot of theory, and I must admit that the Dutch version has, besides the theory, also a simple way of solving the equation.

Given a trendline with the following equation:

y = a·x + b

Then a and b can be solved by using the following equations:

a = (n·E(x·y) − E(x)·E(y)) / (n·E(x2) − E(x)²)
b = (E(y) − a·E(x)) / n

For the following dataset:

x y
1 10
2 1
3 2
4 7

The following base values can be calculated and are input for a and b:

n 4
E(x) 10
E(y) 20
E(x*y) 46
E(x2) 30

So a and b are now:

a = –0.8
b = 7

And the trendline is now y = −0.8x + 7. With this formula it is possible to calculate a trendline value for each x of the dataset, which gives a nice trendline as an extra line chart.
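As a quick sanity check of the arithmetic above (in Python rather than DAX, purely for illustration):

```python
# Least-squares fit y = a*x + b over the example dataset,
# using the same sums as the table above (E = sum over the set)
xs = [1, 2, 3, 4]
ys = [10, 1, 2, 7]

n = len(xs)                                  # n      = 4
sum_x = sum(xs)                              # E(x)   = 10
sum_y = sum(ys)                              # E(y)   = 20
sum_xy = sum(x * y for x, y in zip(xs, ys))  # E(x*y) = 46
sum_x2 = sum(x * x for x in xs)              # E(x2)  = 30

a = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - a * sum_x) / n
print(a, b)  # -0.8 7.0
```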

Plotting this trendline together with the original data and the built-in trendline in Excel, the two lines overlap each other:



To convert this theory to a real-life scenario I use the AdventureWorks database with the FactResellerSales fact table, the DimProduct tables and the DimDate table. With this dataset I ran into some challenges. The first challenge is the series: I use a combination of Years and Quarters, like (20051, 20052, …), which is not linear and continuous. Also the data is not on the lowest grain of those series values.

The first step will be to convert the used series to a series usable for minimizing the least squares. The next step is adding all the needed measures to the FactResellerSales table to calculate the correct a and b.

Converting the Series

I created a nice long date series by adding a calculated column YearQuarter to DimDate

=[CalendarYear] & [CalendarQuarter]

Next I added this calculated column to the FactResellerSales table via a calculated column RelYearQuarter

=RELATED(DimDate[YearQuarter])

The problem is that this series is not nicely formed as needed for a correct calculation, so a conversion is needed. The current version of PowerPivot has the function RANKX, which can be used to rank our series. The default order of RANKX is descending, but an ascending order is needed. Also we need dense ties to get a continuous series. This gives the following calculation for the column RankYQ

=RANKX(ALL(FactResellerSales), [RelYearQuarter], [RelYearQuarter], 1, Dense)

Calculating the Base Values

Most of these values need to be calculated against the full selected set. Therefore most of the calculations include ALLSELECTED(FactResellerSales) to make sure that the measure is calculated against the dataset selected when creating the PivotTable.

n : NrSeries

To calculate the number of series values, a maximum over the new RankYQ column should be enough. The problem is that RankYQ is calculated against the complete set of the data, ignoring all PivotTable filters. So when the first series value is not selected in the PivotTable, RankYQ will remain the same and is not recalculated. One solution is to convert the column to a measure, but this would cause a huge performance impact if a maximum is calculated. The best way is to correct the maximum of RankYQ by subtracting the minimum RankYQ value and adding 1 for the right offset. In this case the total number of series, NrSeries, will give the correct value for a contiguous selection of the series:

=MAXX(ALLSELECTED(FactResellerSales), [RankYQ])-MINX(ALLSELECTED(FactResellerSales), [RankYQ])+1
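The offset correction can be illustrated with a small Python sketch (again just an illustration of the logic, not the DAX itself):

```python
# RankYQ is computed over the full fact table, so deselecting the first
# two series in the PivotTable leaves the remaining rows with ranks 3..5
selected_ranks = [3, 3, 4, 5, 5]

# MAX alone would report 5 series; MAX - MIN + 1 gives the correct
# count for a contiguous selection
nr_series = max(selected_ranks) - min(selected_ranks) + 1
print(nr_series)  # 3
```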

E(x) : SumSeries

To calculate the total sum of the series values the best way is to revert back to basic mathematics knowledge. What we need for our sum of the series is for a given n:


Which can be calculated  for that same n as:


So the measure SumSeries can be calculated as follows


E(x2) : SumSeries2

With the same mathematical knowledge we can solve the sum of our series values squared:

E(x2) = n·(n + 1)·(2n + 1) / 6

The measure SumSeries2 can be calculated as follows

=([NrSeries]*([NrSeries]+1)*(2*[NrSeries]+1))/6


E(y) : SumAmount

This is the easiest calculation we need, as it is the standard way to calculate the total of the selected values, here named SumAmount

=SUMX(ALLSELECTED(FactResellerSales), [SalesAmount])

E(x*y) : SumRankSales

To calculate this measure the same trick is needed as with the NrSeries measure: the selection of the series may not start at the first value, so it is not possible to simply multiply the RankYQ value by the SalesAmount. Here also the minimal selected RankYQ value of the series needs to be subtracted from the current RankYQ value and corrected with 1. So the SumRankSales measure is calculated as follows

=SUMX(ALLSELECTED(FactResellerSales), ([RankYQ]-MINX(ALLSELECTED(FactResellerSales), [RankYQ])+1)*[SalesAmount])

a and b

With all the base values in place a and b can be calculated as follows. In the calculation of a there is a check for division by zero (0).

=IF((([NrSeries]*[SumSeries2])-([SumSeries]*[SumSeries]))=0, BLANK(), (([NrSeries]*[SumRankSales])-([SumSeries]*[SumAmount]))/(([NrSeries]*[SumSeries2])-([SumSeries]*[SumSeries])))

And with a known, b follows as

=([SumAmount]-[a]*[SumSeries])/[NrSeries]

SalesAmount Trend

With a and b now correctly calculated, we can calculate a measure, SalesAmount Trend, that gives the trendline usable for Power View. An IF statement is included to make sure that the trendline is only calculated when there is an original SalesAmount value available. The current series value is calculated using the current RankYQ value minus the minimal selected RankYQ value (plus a correction of 1 to complete the calculation).

=IF(ISBLANK([Sum of SalesAmount]), BLANK(), [a]*(MAX([RankYQ])-MINX(ALLSELECTED(FactResellerSales), [RankYQ])+1)+[b])

This trend value can now be used to validate with Excel if it shows the correct trendline.



Of course there are some limitations to this approach, which wouldn't be there if Power View supported trendlines natively:

  • The series values need to form a contiguous series, so no gaps. In this case the trendline will be off if 2006 is deselected and 2005 and 2007 are selected.
  • The Trendline is only valid for the given series and values.
  • Only a linear trendline is calculated at this moment. For exponential and logarithmic trendlines the calculation is almost the same, but maybe I will blog about this some other time.
  • In Power View it is not possible to combine a bar chart with a line chart, so if you need to show your data as bars this trendline will also show as a bar chart.


With all the intermediate measures in the model, some hiding is needed before it can really be used in Power View. But most of that is needed anyway, because of the different natures of PowerPivot and Power View.

There is probably a better and faster way to determine the total number of distinct series values. If anyone has a good suggestion: please let me know.

Unfortunately I don't have a working Power View installation, so I cannot show the real result. But you can try it yourself by downloading the sample Excel sheet from here.

Enjoy trendlining


Posted in PoC | Tagged , | 2 Comments

The beauty of the #SSAS DefaultMember

The DefaultMember property for a dimension is known by every Microsoft BI professional. This property sets the default selected member of a dimension attribute, so that member will be selected by default.

The dimension DefaultMember can be declared in BIDS or by using an Alter Cube statement as part of the cube calculations. While the first is static, the latter can be used to dynamically set a DefaultMember. The DefaultMember can also be set by defining a role, which is very useful for dynamic role security.
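For example, a dynamic default set from the calculation script might look roughly like this (hypothetical AdventureWorks-style names; the exact member expression depends on your model):

```mdx
// Set the default Calendar Year to the last year in the attribute hierarchy
ALTER CUBE CURRENTCUBE
UPDATE DIMENSION [Date].[Calendar Year],
DEFAULT_MEMBER = Tail([Date].[Calendar Year].[Calendar Year].Members).Item(0);
```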

But the dimension DefaultMember property isn't the only one. Almost every object in Analysis Services has a DefaultMember property, but in this blog post I only look at the DefaultMember of the Measures ‘dimension’.

Determination of the DefaultMember

The big question is: how is the DefaultMember determined when the property is not set? If we take a look at Books Online there is a remark for the DefaultMember property (although the article refers to Analysis Services 2005, the behavior hasn't changed):

If DefaultMember is not specified, or is set to an empty string, then Microsoft SQL Server 2005 Analysis Services (SSAS) chooses a member to use as the default member.

So what is the default behavior of Analysis Services when choosing the DefaultMember? It chooses the first member that can be found. So if we query the DefaultMember of a hierarchy, the All level is returned.


And if you query an attribute with the property IsAggregatable set to false, the FirstChild is returned, like the Scenario attribute in the AdventureWorks cube.


But there is one very important DefaultMember that helps the overall experience of Analysis Services: the DefaultMember of the Measures.

DefaultMember of the Measures

What will a query return without a Measure added? In this case the Query Engine of Analysis Services adds the DefaultMember of the Measures ‘dimension’: the DefaultMeasure. If we look at the AdventureWorks cube, this value is set to the Reseller Sales Amount.


When not set, Analysis Services will take the first Measure from the first Measure Group, in this case the Internet Sales Amount. You can manually remove the DefaultMeasure without reprocessing the AdventureWorks cube.


But how can this behavior impact the overall experience? When we select a dimension attribute on the rows or columns in Excel, the following query is fired against Analysis Services.


This is a query without a measure included, so Analysis Services adds the DefaultMember (DefaultMeasure) to the query as described above.
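A minimal way to see this behavior is a query with no measure on any axis; the result still contains a value column, namely the DefaultMeasure (hierarchy and cube names here are illustrative AdventureWorks names):

```mdx
-- No measure anywhere in this query: the engine adds the DefaultMeasure
SELECT [Product].[Category].Members ON COLUMNS
FROM [Adventure Works]
```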

So this could result in a performance delay when the DefaultMeasure is part of a Measure Group with a lot of partitions and data. It can then take a long time before the members of the dimension attribute are shown in Excel.

A simple way to solve this is to either set the DefaultMeasure to a measure from a small Measure Group or move that Measure Group up to the top of the Measure Groups. In that case the measure-less query will only hit the partitions of the small Measure Group.


-JP (@jppp)

Posted in Out the field | Tagged