All Posts By Matthew Oen

Power BI: Dashboard In A Day

By Matthew Oen | Data Visualisation


Over the past few weeks, FTS Data & AI have had the privilege of hosting multiple Microsoft-sponsored Power BI: Dashboard In A Day (DIAD) events across Sydney. As a Microsoft Gold Partner for Data Analytics, the team presented over 3 days’ worth of content to a combined audience of over 100 business analysts, report developers and data professionals. Handpicked to present at Microsoft’s headquarters in North Ryde, we were able to deliver events that were both informative and beneficial for all those in attendance. For those who are curious and could not attend, this blog post covers what a DIAD event is, and why you should register to attend the next one.

L-R: Alex Gorbunov, Matthew Oen, Swetha Pakki, Sahan Vaz Goonawardhane, Ajit Ananthram

What Is It?

DIAD is a free one-day course designed by Microsoft to help analysts explore the full capabilities of Power BI. Attendees learn about Power BI in detail, follow a step-by-step lab manual, work through two real-world practical examples and receive expert guidance from a team of experienced instructors. Through self-paced learning, attendees can develop their skills in Power BI and create a business-ready dashboard in a matter of hours.

Most importantly though, a large section of the day is devoted to providing attendees with the opportunity to develop Power BI reporting from their own datasets. This is where attendees get the most value, as they can ask questions and get assistance from experienced consultants regarding their own business's reporting projects. The FTS Data & AI team were able to lend a hand and provide tailored advice to a wide range of businesses at various stages of their Power BI reporting journey.

Presenting to a large audience at Microsoft HQ

When Is It?

DIAD events occur throughout the year. To find dates for upcoming events in your area, contact us.

Attendees work on real-world practical examples

Where Is It?

DIAD events are held across the country. The Sydney events this year were held at the Microsoft head office in North Ryde. At Microsoft HQ, attendees were able to engage fully with the technology and see first-hand what Power BI and other Microsoft products are capable of, and how they could be used in their businesses.

See the full and latest capabilities of Power BI

How Much Will It Cost?

DIAD events run for 8 hours and are FREE. Register your interest early to reserve a seat, as spots at each event are limited. Lunch and refreshments are complimentary and provided throughout the day.

Learn how Power BI can be implemented at your organisation

Each attendee receives expert advice

Why Should I Go?

DIAD is an event like no other. Designed by Microsoft and delivered by professionals, DIAD empowers attendees with the skills and best practices needed to develop successful Power BI reporting solutions. Based on the overwhelmingly positive feedback from attendees, these events have been instrumental in getting Power BI quickly adopted in several organisations.

The opportunity to receive tailored advice from professional Power BI consultants means that you can accelerate the implementation of Power BI at your business, and be confident that you have the capability to develop powerful reporting solutions at your organisation well into the future.

See how other organisations have successfully adopted Power BI

 

If you want to know more about Dashboard In A Day events, please contact us for more information.

SSAS Tabular Optimisation In 5 Easy Steps

By Matthew Oen | Data & AI

SSAS Tabular Optimisation

A well-designed SSAS tabular model is often the key ingredient in a successful analytics solution. It aids the business in delivering the type of ad-hoc analysis and on-the-fly insights that drive performance improvement and economic benefit. However, not all models are well designed. Indeed, a poor-performing cube can quickly become a burden for any organisation, negatively impacting the quality of analysis and becoming a drain on valuable business resources. That's why SSAS Tabular optimisation is so crucial for businesses wanting to get the most value out of their analytics solution.

Recently, I consulted for a large electrical merchandising business who were having some trouble with their SSAS tabular models. With national operations, it was imperative that their cubes could rapidly and reliably deliver the analysis needed for the business to confidently make strategic decisions around sales, purchasing and inventory. Memory issues and ambiguous design principles were proving to be a challenge in getting the tabular model to behave, and it was clear that I needed to tune the existing cubes with some simple optimisation techniques.

When attempting SSAS Tabular optimisation, I employ a straightforward 5-step strategy:

  1. Examine the model and remove unnecessary tables
  2. Examine the tables, remove unnecessary columns and edit table structure/content
  3. Examine the columns and change data types
  4. Examine the DAX measures and edit expressions
  5. Examine server properties and edit memory settings

This 5-step performance tuning approach ensures that tabular model issues can be systematically identified and appropriately addressed.

1. Examine the Model

A concise tabular model is one that performs best. Therefore, the first step is to review the model itself. Very often a poor-performing cube contains unnecessary tables or relationships that provide no real value. A thorough review of what tables are present in the model and what value they bring will uncover what is necessary and what is redundant. Talking to stakeholders about what they need will also help determine what tables should go and what needs to stay. In my example, I was able to reduce the cube size by removing unnecessary dimension tables that I discovered the business was no longer interested in. This redesign process typically yields ‘quick-and-easy’ wins in terms of cube performance, as it is the easiest to implement.

Figure 1. Removing unnecessary tables reduces SSAS tabular model complexity

 

2. Examine the Tables

What data actually goes into the tables will ultimately determine the quality of the tabular model. As with the first step, a review of the tables will often uncover unnecessary columns that do not need to be loaded into the model; for example, columns that are never filtered on or that contain mostly null values. Table structure is also important to tabular model performance, as it affects how much data needs to be loaded. For example, you could reduce the row count of the sales fact table by aggregating it to invoice level instead of invoice-line level, as sketched below. Such a reduction in size means that less memory is required by the cube.

Figure 2. Tidy up tables by removing columns, and reducing rows
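To make the row-reduction idea concrete, here is a minimal pandas sketch that aggregates an invoice-line-level extract up to invoice level before it is loaded into the model. The column names are hypothetical, not the client's actual schema.

import pandas as pd

# Hypothetical invoice-line-level extract: one row per line item on each invoice
lines = pd.DataFrame({
    "invoice_id":   [1001, 1001, 1001, 1002, 1002],
    "invoice_date": ["2019-05-01", "2019-05-01", "2019-05-01", "2019-05-02", "2019-05-02"],
    "product_id":   [10, 11, 12, 10, 13],
    "quantity":     [2, 1, 5, 3, 1],
    "line_amount":  [20.0, 15.5, 50.0, 30.0, 9.9],
})

# Aggregate to invoice level: far fewer rows for the model to hold in memory
invoices = (
    lines.groupby(["invoice_id", "invoice_date"], as_index=False)
         .agg(total_quantity=("quantity", "sum"),
              total_amount=("line_amount", "sum"))
)

print(len(lines), "line-level rows ->", len(invoices), "invoice-level rows")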

 

3. Examine the Columns

A crucial aspect of cube performance is compression. Columns with certain data types, or with unique values in every row, compress poorly and require more memory. An effective optimisation technique is to correct the data type or values in a column so that it compresses better. Casting values as integers instead of strings, or fixing decimal precision, are fundamental practices that are often overlooked in tabular model design and ultimately come at the expense of performance. In my example, I was able to create a new unique invoice ID that could be used by the business and compressed as an integer. Previously, the varchar invoice key was unique on almost every row of the sales table and was compressing very poorly. The VertiPaq storage engine compresses data column by column, and repeated values within a column greatly aid this. A great tool for this kind of analysis is the VertiPaq Analyzer, which can highlight potential compression pain points and help track the results of cube optimisation work.

Figure 3. The VertiPaq Analyzer reveals compression pain points
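VertiPaq handles encoding internally, but the effect of swapping a high-cardinality string key for an integer surrogate is easy to demonstrate outside the cube. The following pandas sketch, using made-up invoice keys, simply contrasts the in-memory footprint of the two representations; it illustrates the general principle rather than VertiPaq's own compression.

import pandas as pd

n = 1_000_000

# Hypothetical varchar-style invoice keys: unique on almost every row
varchar_keys = pd.Series([f"INV-{i:09d}" for i in range(n)])

# Equivalent integer surrogate key carrying the same information
int_codes, _ = pd.factorize(varchar_keys)
int_keys = pd.Series(int_codes)

print("string key :", varchar_keys.memory_usage(deep=True) // 1024, "KB")
print("integer key:", int_keys.memory_usage(deep=True) // 1024, "KB")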

 

4. Examine the DAX

For cube users, it is critical that the OLAP queries they run return accurate results rapidly. If a user cannot get the information they need from a model in a reliable or timely manner, the cube is failing to provide the benefits expected of it. Therefore, an important part of tabular model optimisation revolves around the measures, and ensuring that the DAX expressions used are optimised for the formula engine. Keeping measures simple by using basic expressions and removing complicated filtering clauses means they should perform better. In my example, I was able to rewrite some of the sales measures at different period intervals (such as month-to-date and year-to-date) so that they worked across different filter contexts, reducing calculation time.

Figure 4. Simple DAX equals better performance

 

5. Examine the Server

Finally, the biggest factor in tabular model processing performance is the memory configuration. Depending on the edition of Analysis Services, different memory limits apply. For Standard Edition, the 16 GB limit imposed on a single instance of Analysis Services can often be the 'killer of cubes'. If a reasonable business case exists, then moving to Enterprise Edition or a cloud-based solution can be the right answer to memory woes. However, there are steps that can be taken to get the best out of an SSAS tabular model without abandoning Standard Edition altogether. Increasing the amount of RAM on the server and modifying the server instance memory properties allows you to fine-tune processing and reduce the likelihood of memory exception errors. In my example, the cube was failing to process as it would run out of memory during a Full Process. I increased the RAM from 32 GB to 40 GB and reduced the Total Memory Limit in the server instance properties. With more memory and lower thresholds at which the memory cleaner processes were initiated, the cube was able to process in full each time without error.

Figure 5. Fine tune the memory limits to find the optimal level of performance
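For context on how those thresholds behave: SSAS memory limit properties set to values of 100 or less are interpreted as percentages of total physical memory (values above 100 are read as bytes). The small sketch below works through the arithmetic with illustrative numbers rather than the client's actual settings.

# Illustrative numbers only; actual SSAS settings depend on the instance and workload
total_ram_gb = 40                # physical memory on the server after the upgrade

# Memory limit properties of 100 or less are percentages of physical memory
low_memory_limit_pct = 60        # cleaner begins reclaiming memory around here
total_memory_limit_pct = 75      # cleaner works aggressively above this point

low_threshold_gb = total_ram_gb * low_memory_limit_pct / 100
total_threshold_gb = total_ram_gb * total_memory_limit_pct / 100

print(f"Cleaner starts at ~{low_threshold_gb:.0f} GB, "
      f"aggressive above ~{total_threshold_gb:.0f} GB")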

 

Summary

Like any business asset, an SSAS tabular model loses value when it is not properly configured or utilised. However, with the right methodology, any model can be transformed from an underperforming asset into a valuable resource for the business.

 

If you’re having trouble with SSAS tabular optimisation, we want to hear about it! Please contact us to find out how we can help you optimise your cubes.

EMu PowerBI Collections Reporting

By Matthew Oen | Data Visualisation


Although they may have more interesting stories to tell, museums are no different to any other organisation when it comes to the need for management system reporting. Where most businesses require comprehensive reporting from their ERP, a museum needs to be able to extract data from its collection management system and craft a report that effectively informs management about the status of the objects in its collections catalogue. In this case, the museum needed EMu Power BI collections reporting to help do exactly that.

FTS Data & AI were recently tasked by a museum with the delicate job of developing collections reporting from the museum's EMu collections management system, a system designed specifically for museums and historical institutions. The museum was undertaking an assessment program, reviewing and recording object information across its entire collection of over 100,000 historical artefacts. Monitoring the progress of this program was vital for management, as it informed timelines, capabilities and project resource planning.

The Problem

As previously mentioned, EMu is a collections management system designed specifically for museums. The rich data maintained within this system must first be extracted in order to reap the reporting rewards. However, this is easier said than done. EMu's Texpress database engine, ODBC connectivity and ADO RecordSet connectors are not exactly conducive to Power BI or a modern reporting outcome, so EMu Power BI reporting is not as simple as it is for some systems.

Instead, a more sophisticated approach was required to extract the data. As part of the museum's commitment to public transparency, it had developed a modern GraphQL web API that could serve collection information. Developed in-house, this API was fed by a harvester program that extracted data from EMu and wrote it into the MongoDB cluster that serves the API. After careful examination, we identified that this API could be used to meet the reporting requirements of the assessment program. A custom query was then written, and Power BI was able to connect and pull the relevant data needed for reporting. The extracted data, in JSON format, was then cleaned and transformed into a working, healthy dataset.
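In the actual solution the query was issued from Power BI itself, but the underlying pattern of posting a GraphQL query and flattening the JSON response into a table is easy to sketch. The example below uses Python with requests and pandas against a hypothetical endpoint, query and schema; the museum's real API, field names and authentication will differ.

import requests
import pandas as pd

# Hypothetical GraphQL endpoint and query for illustration only
API_URL = "https://api.example-museum.org/graphql"
QUERY = """
query {
  objects(limit: 1000) {
    id
    title
    assessmentStatus
    lastAssessedDate
  }
}
"""

response = requests.post(API_URL, json={"query": QUERY}, timeout=30)
response.raise_for_status()

# Flatten the nested JSON payload into a tabular, report-ready dataset
records = response.json()["data"]["objects"]
df = pd.json_normalize(records)
print(df.head())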

The Report

The report design was driven by our understanding of user workflow. An overall dashboard page, followed by a heat-formatted column chart that drilled through to an object details report page, created a natural reporting rhythm that could be easily interpreted by report consumers:

EMu Power BI report pages: overall dashboard, heat-formatted column chart and object details

Custom functionality including filtering, formatting and forecasting meant that additional insights could be gleaned from the dataset. We were able not only to report on the progress of the assessment program, but also to provide guidance on which objects the collections team should review and assess in the next month, which proved incredibly useful in managing the resources needed for the project.

The Outcome

Starting with careful data extraction via the GraphQL API and continuing with focused report design principles, we ended with a sophisticated collections reporting experience: the museum could leverage its existing EMu collections management system alongside a modern reporting tool to find efficiencies in a large internal project.

If you have a reporting challenge and need help, please contact us to discuss how we can assist.

PowerBI ML – How to build Killer ML with PowerBI

By Matthew Oen | AI & ML, Data Visualisation

PowerBI ML: Unleashing Machine Learning in Microsoft PowerBI in 5 easy steps

AI and ML are key tools enabling modern businesses to unlock value, drive growth, deliver insights and outcompete the market. Their ability to handle massive sets of data and identify patterns is transforming decision making at every level of organisations. Data and AI strategy is therefore rapidly evolving to explore the ways in which AI can best be utilised to enhance business operations. However, pragmatically harnessing AI for business needs has remained challenging, because the solutions offered typically incur significant resource overhead, are hard to understand and may fail to deliver actionable business outcomes. A gap has therefore emerged between BI and AI: a failure to bridge the insights we learn with the intelligence to improve. The most recent release of ML features in Microsoft Power BI aims to eliminate that gap by bringing Artificial Intelligence (AI) and Machine Learning (ML) capabilities into the practical setting of self-service analytics.

Power BI has established itself as a vital tool in modern data analytics. Its easy-to-use interface, coupled with powerful reporting capabilities, has made it the reporting platform of choice for delivering reliable business insights. The recent inclusion of ML and AI capabilities has significantly strengthened the tool by combining easy interactivity with cutting-edge data analysis.

Overview

Power BI ML (machine learning) is now possible using Dataflows, the simple ETL tool that empowers analysts to prepare data with low or no code. Automated Machine Learning (AutoML) is then built on top of Dataflows, again leveraging the interactive approach of Power BI without compromising on quality of analysis.

5 Easy Steps

  1. In a Workspace hosted by Premium capacity, select ‘+Create’ in the top right corner, and select ‘Dataflows’
  2. Choose the data source you wish to run the model on:
PowerBI ML Choosing Data Source

  3. After loading the data, the familiar Power Query screen will appear. Perform any data transformations as required, and select save & close:
PowerBI ML Power Query

  4. The dataflow should now appear underneath Dataflows in the workspace. Select the dataflow, then select the brain icon, and select ‘Add a machine learning model’:
PowerBI ML Add Model

  5. Create the model by inputting the relevant information. You will get the option to select the model type and inputs for the model:
PowerBI ML Select Model

After creating the model, you will need to train it. The training process samples your data, and splits it into Training and Testing data:

PowerBI ML Train Model
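AutoML performs this sampling and splitting for you behind the scenes. For intuition only, the idea corresponds to the familiar train/test split; the following scikit-learn sketch uses a tiny, made-up customer dataset and is not what Power BI runs internally.

import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical customer dataset with a binary outcome column
data = pd.DataFrame({
    "tenure_months": [3, 24, 1, 36, 12, 48, 6, 60],
    "monthly_spend": [20, 80, 15, 120, 55, 200, 35, 150],
    "churned":       [1, 0, 1, 0, 0, 0, 1, 0],
})

features = data[["tenure_months", "monthly_spend"]]
target = data["churned"]

# Hold back 25% of the rows so the trained model can be tested on unseen data
X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.25, random_state=42
)

print(len(X_train), "training rows /", len(X_test), "testing rows")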

Once the model has finished training, it will appear under the Machine learning models tab in the Dataflows area of the workspace, with a timestamp showing when it was last trained. You can then review the model validation report (which describes how well the model is likely to perform) by selecting ‘View performance report and apply model’.

Lastly, you can apply the model to the Dataflow by selecting ‘Apply model’ at the top of the validation report. This will prompt a refresh of the Dataflow to preview the results of your model. Applying the model creates new entities (columns) in the Dataflow you created. Once the Dataflow refresh has completed, you can select the Preview option to view your results. To build reporting from the model, simply connect Power BI Desktop to the Dataflow using the Dataflows connector and begin developing reports on the results of your machine learning model.

Outcomes

With machine learning now integrated with PowerBI, users can upgrade from reporting on business performance to predicting it. From a business perspective, the addition of ML means that PowerBI reporting has gained an extra dimension. It can easily be incorporated into existing reporting and is capable of dramatically changing decision making. For the PowerBI ML user, no new skills are required, as ML leans heavily on the existing interface and user experience.

Common use cases where machine learning in PowerBI can be readily implemented include:

  • Improving your existing PowerBI CRM reporting by creating a general classification model to identify high and low value customers.
  • Boosting the value of your financial reporting by developing a forecasting model to help predict sales trends and downturns.
  • Enhancing your asset reporting by building a regression model to calculate the probability of asset failure or breakdown.
  • Refining your CRM reporting by constructing a binary prediction model to determine the likelihood of a customer leaving or staying.

If you want to know how machine learning can be implemented in your organisation, please contact us, and ask us about our AI services.


An Automated BI Solution: Microsoft Flow

By Matthew Oen | Data Visualisation

Simple automation can often deliver big improvements in the context of BI solutions. A great example of this is when using Microsoft Flow, OneDrive for Business and Power BI. As part of a broader Data & AI strategy, the combination of these applications can deliver an impressive automated result, dramatically increasing the value of deploying simple BI solutions.

Opportunities for automation in simple BI solutions are often overlooked, as the perceived cost of such a project is assumed to outweigh the benefits. However, good data platform delivery, involving rigorous assessment of business processes, can help identify instances where automation will deliver scalable value. As a real-world example, a client had a set of Excel files that were sent each week from their ERP system via email. Someone would then take those Excel attachments, manually perform transformations in the files and import them into Power BI, hoping that the reports would turn out as expected. Such a process was not only time consuming but also prone to manual error, ultimately failing to deliver the insights and value the client was after in a BI solution.

How to Setup the Flow

Enter Microsoft Flow and OneDrive for Business. Flow is an application that helps automate tasks by integrating workflows between different applications. In our case, the implementation was straightforward:

  1. Create a Flow that automatically saves the Excel attachments from those emails into OneDrive for Business (whilst performing some renaming and archiving along the way for good measure)
  2. Connect Power BI to those files in OneDrive

Designing the flow was simple, requiring only basic information such as:

  • Where the email was being sent from
  • The subject of the email
  • What to name the Excel file
  • Where to save the file

The Flow would then take this information into its ‘trigger’ and ‘action’ steps, forming a logical and repeatable workflow.

Once the Flow was set up and Power BI connected to OneDrive for Business, the previously manual process for delivering key business reporting was transformed into an entirely automated one. Time once lost to handling and wrangling data could now be spent on core value activities and enhancing data visualisation. Higher-quality reporting was delivered, granting previously unseen insights into the business.

 

Understanding where automation can provide value in BI solutions is crucial, particularly in simple BI projects. As evidenced above, a simple combination of Microsoft Flow, OneDrive for Business and Power BI can be the difference between success and failure for a BI solution.
