All Posts By Matthew Oen

Azure DevOps Power BI Reporting


Azure DevOps (ADO) is fast becoming the application lifecycle management tool of choice for modern organisations. With boards, CI/CD pipelines and Git repo capabilities, Agile practices have never been easier to implement in project management. However, as a DevOps tool, it is understandably not designed to be a fully-fledged reporting and analytics solution as well. Luckily, Power BI can integrate with ADO to deliver the kind of enterprise reporting that project managers need to properly monitor their projects. This blog post covers how to set up Azure DevOps Power BI reporting, along with some examples of the kind of reporting available.

Connecting to ADO

A connection to ADO is made possible via the OData feed option available in Power BI. Once connected, you will need to select the relevant tables to begin building a data model. For most project managers, the main objective in ADO reporting is getting clear visibility of the progress of work items. For that reason, the tables imported into the model should contain information relating to Work Items and Iterations.
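If you want to inspect the feed before pointing Power BI at it, the short sketch below shows the kind of OData request that sits underneath the connection. Treat it as a minimal sketch only: the organisation, project and personal access token (PAT) are placeholders, and the v3.0-preview Analytics endpoint and field names should be verified against your own ADO instance.

```python
# A minimal sketch of querying the Azure DevOps Analytics OData feed.
# Requires the 'requests' package; ORG, PROJECT and PAT are placeholders.
import requests

ORG = "your-organisation"          # placeholder organisation name
PROJECT = "your-project"           # placeholder project name
PAT = "your-personal-access-token" # placeholder PAT with Analytics read scope

# The Analytics OData endpoint exposed by Azure DevOps.
url = f"https://analytics.dev.azure.com/{ORG}/{PROJECT}/_odata/v3.0-preview/WorkItems"

params = {
    "$select": "WorkItemId,Title,WorkItemType,State",
    "$filter": "WorkItemType eq 'Task'",
    "$top": "100",
}

# ADO accepts basic auth with an empty username and the PAT as the password.
response = requests.get(url, params=params, auth=("", PAT))
response.raise_for_status()

for item in response.json()["value"]:
    print(item["WorkItemId"], item["Title"], item["State"])
```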

Once imported, some simple transformations are required to clean the data. It is crucial that the correct relationships are created between work item tables, because work items in ADO are hierarchical: Epics contain Features, which contain User Stories, which contain Tasks. This hierarchical logic must be captured in the model for the reporting to make sense.
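To make the hierarchy concrete, here is a rough Python sketch of flattening work items into an Epic > Feature > User Story > Task view, mirroring the relationships the model needs. The column names follow the Analytics OData WorkItems entity, but treat them as assumptions to check against your own feed.

```python
# A rough sketch of flattening the ADO work item hierarchy with pandas.
# Column names (WorkItemId, ParentWorkItemId, WorkItemType, Title) are
# assumed to match the Analytics OData WorkItems entity.
import pandas as pd

work_items = pd.DataFrame({
    "WorkItemId":       [1, 2, 3, 4],
    "ParentWorkItemId": [None, 1, 2, 3],
    "WorkItemType":     ["Epic", "Feature", "User Story", "Task"],
    "Title":            ["Platform", "Reporting", "Build dashboard", "Create dataset"],
})

def level(df, wi_type, name):
    """Select one level of the hierarchy and rename its columns."""
    subset = df[df["WorkItemType"] == wi_type]
    return subset[["WorkItemId", "ParentWorkItemId", "Title"]].rename(
        columns={"WorkItemId": f"{name}Id",
                 "ParentWorkItemId": f"{name}ParentId",
                 "Title": name})

# Walk down the hierarchy: each level joins to its parent's Id.
flat = (level(work_items, "Task", "Task")
        .merge(level(work_items, "User Story", "Story"),
               left_on="TaskParentId", right_on="StoryId")
        .merge(level(work_items, "Feature", "Feature"),
               left_on="StoryParentId", right_on="FeatureId")
        .merge(level(work_items, "Epic", "Epic"),
               left_on="FeatureParentId", right_on="EpicId"))

print(flat[["Epic", "Feature", "Story", "Task"]])
```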

Once the model has been created, the report visualisations can be built. In my experience, a clearly constructed table outlining key work item information, including Sprint, Epic, Feature, User Story, Task, Assigned To, Completed Date, Task Number and Sprint Percentage, is precisely what project managers want to see. Although not the most visually compelling report, these tables clearly articulate work progress in a single view, something not easily achieved natively within ADO.

What Reporting Is Available?

As mentioned previously, ADO reporting is primarily concerned with reporting on the progress of work items. However, the OData feed is able to capture most of the ADO backend, meaning that additional reporting on things such as pipelines and test results is also possible. Some typical reporting examples include:

  • Sprint progress reporting
  • Resource burndown and capacity
  • Work item cycle time
  • Work item predictability and productivity
  • Task completion forecasting
  • Work item distribution
  • CI/CD pipeline failures
  • Application testing and release results

Virtually any reporting can be custom built using Power BI and the OData feed.
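As one concrete example from the list above, a basic sprint burndown can be computed from daily work item snapshots. The sketch below assumes a WorkItemSnapshot-style feed with DateValue and RemainingWork fields; verify the entity and field names against your own instance.

```python
# A hedged sketch of a sprint burndown from daily work item snapshots,
# one row per work item per day. Field names follow the Analytics
# WorkItemSnapshot entity but should be verified against your feed.
import pandas as pd

snapshots = pd.DataFrame({
    "DateValue":     ["2020-06-01"] * 2 + ["2020-06-02"] * 2 + ["2020-06-03"] * 2,
    "WorkItemId":    [101, 102] * 3,
    "RemainingWork": [8, 6, 5, 6, 2, 1],
})

# Burndown = total remaining work per snapshot date.
burndown = snapshots.groupby("DateValue")["RemainingWork"].sum()
print(burndown)
```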

Developing Power BI reporting for ADO is also useful because of its scalability. The OData feed can be re-pointed to any ADO instance, meaning that your reporting can be easily reproduced across instances. At FTS, we offer a pre-packaged report that can be easily implemented in any instance. It contains the ADO reporting we have found most relevant and has been very useful for us in managing our own projects.

Finally, Power BI reports can be easily embedded back into ADO via the native web-embed functionality. First, a dashboard must be created in DevOps and an iframe dashboard widget added to it. The Power BI report can then be embedded into the widget, allowing you to view your custom reporting within the DevOps browser. This ability to embed reporting elevates ADO into an all-in-one development management tool, greatly assisting project managers in keeping track of resources and progress.

If you want to begin Azure DevOps Power BI reporting, please contact us for more information.

Power BI Dataflows: New and Improved


If you attended one of our Dashboard In A Day events earlier this year, you would have seen a brief demonstration of Power BI Dataflows, and what they can mean for an organisation. With the recent Microsoft update of Dataflows, now is a good time to familiarise yourself with this feature and learn how you can leverage it to improve the data culture in your organisation.

What Are Dataflows?

If you have worked with Power BI before, then you will be familiar with Power Query, the tool used for extracting, transforming and loading data into a data model. Power Query allows you to connect to a variety of data sources and perform detailed transformations to manipulate data into the format needed for analysis.

Dataflows is an extension of this, in that it allows you to create these Power Query transformations and make them available across your organisation for repeatable use. This is important for two reasons:

  1. It scales data preparation, and eliminates the need for users to perform transformations again and again.
  2. It introduces a layer of governance in centralising and standardising data preparation assets.

Dataflows gives users access to clean, transformed data that they can rely on and re-use. This is vital in supporting self-service analytics in an organisation, as it provides users with the platform needed to access reliable and pre-configured data assets.

New Capabilities

Power BI has now introduced endorsement capabilities into the Dataflows feature. Dataset endorsement has been available for some time and has proven very useful in establishing a quality data culture in an organisation. With this capability now extended to Dataflows, quality data assets can be more easily identified and shared across an organisation. Per the endorsement principles, dataflows can be marked for Promotion or Certification.

Promotion – tells users that the dataflow owner believes that this dataflow is good enough to be shared and reused. Users will need to have confidence in the dataflow owner to trust the quality of the dataflow.

Certification – tells users the dataflow has passed internal tests for quality per organisational policy. Only specified users are authorised to mark Dataflows as Certified.

Certified and Promoted dataflows are marked with badges when users connect to them in Power BI Desktop. This identification means that users can easily see which dataflows they should connect to when preparing reporting or analysis.

Why It’s Important

Endorsement is an important step in making Dataflows an enterprise-ready feature. With endorsement, an organisation must adopt a policy for reviewing and certifying data preparation assets. The introduction of this policy greatly improves the quality of data in an organisation, as only certified dataflows are used for reporting and analytics outcomes.

Organisations that wish to promote a self-service environment will also benefit greatly from endorsed dataflows, as it reduces the need for dedicated resources to create and control data access. Instead, quality data assets can be centrally managed via Power BI and made available to the organisation to connect to and use. Users can rely on the quality of data, and do not need to perform any additional tasks to cleanse the data to get it ready for their analysis.

If you want to know how Dataflows can be used in your organisation, please contact us for more information.

Power BI: Dashboard In A Day – Jan 2020


FTS Data and AI kicked off 2020 the right way by hosting another Power BI: Dashboard In A Day (DIAD) event in January. With more events scheduled in the coming months, this post recaps the January event, the feedback we got and what you can expect from the next DIAD event.

What Is It?

As previously mentioned, DIAD is a free one-day course designed by Microsoft to help analysts explore the full capabilities of Power BI. Attendees learn about Power BI in detail, follow a step-by-step lab manual, attempt 2 real-world practical examples and receive expert guidance from a team of experienced instructors. Through self-paced learning, attendees can properly develop their skills in Power BI and create a business-ready dashboard in a matter of hours. No matter what their capabilities are, the workshop is designed to be accessible to everyone in attendance.

“Great presentation, great content and very useful hands-on labs. I would definitely recommend this.”

The January event this year drew attendees from a diverse range of industries and roles. From business analysts to IT managers (with a few engineers and accountants in the mix), DIAD was a hit for all those in attendance looking to sharpen their Power BI skills. From the FTS perspective, this was the ideal audience: a healthy cross-section of the modern BI reporting landscape eager to see what Power BI is capable of. The benefit of DIAD is that it can help anyone at any stage of their BI reporting journey, and this was never more evident than with the variety of attendees here.

“The content was really easy to follow and I learned a lot. The presenters were excellent and know their stuff.”

From the feedback we collected, the most valuable part of DIAD is the bring-your-own-data session. This is a unique session dedicated to helping attendees solve their individual reporting problems. The FTS team were on hand to deliver one-on-one advice to every participant, with questions ranging from deep technical issues to requests for more general guidance. This is the best part of DIAD, and what makes this event like no other. The consulting team love a challenge, and helping the audience get the results they want is an extremely rewarding experience.

“[I liked] that we have the ability to bring our own data to build our own dashboard… Thanks FTS.”

Another unique benefit of DIAD is the showcase session. During the lunch break, attendees are invited to eat while the FTS team showcase some real-world Power BI examples. The audience gets an inside look at how we understand reporting problems and how we use Power BI to solve them. Learning about the functionality of Power BI is one thing, but seeing how it is applied is where you truly realise the benefits.

“All the coordinators were enthusiastic to resolve and address questions.”

Why Should I Go?

DIAD is an event like no other. Designed by Microsoft and delivered by professionals, DIAD empowers attendees with the skills and best practices needed to develop successful Power BI reporting solutions. Based on the overwhelmingly positive feedback from attendees, these events have been instrumental in getting Power BI quickly adopted in several organisations.

If you want to know more about Power BI: Dashboard In A Day events, please contact us for more information or check out our events page below for upcoming sessions.

Power BI: Dashboard In A Day



Over the past few weeks, FTS Data & AI have had the privilege of hosting multiple Microsoft-sponsored Power BI: Dashboard In A Day (DIAD) events across Sydney. As a Microsoft Gold Partner for Data Analytics, the team presented over 3 days’ worth of content to a combined audience of over 100 business analysts, report developers and data professionals. Handpicked to present at Microsoft’s headquarters in North Ryde, we were able to deliver events that were both informative and beneficial for all those in attendance. For those who are curious and could not attend, this blog post covers what a DIAD event is, and why you should register to attend the next one.

L-R: Alex Gorbunov, Matthew Oen, Swetha Pakki, Sahan Vaz Goonawardhane, Ajit Ananthram

What Is It?

DIAD is a free one-day course designed by Microsoft to help analysts explore the full capabilities of Power BI. Attendees learn about Power BI in detail, follow a step-by-step lab manual, attempt 2 real-world practical examples and receive expert guidance from a team of experienced instructors. Through self-paced learning, attendees can properly develop their skills in Power BI and create a business-ready dashboard in a matter of hours.

Most importantly though, a large section of the day is devoted to providing attendees with the opportunity to develop Power BI reporting from their own datasets. This is where attendees get the most value, as they can ask questions and get assistance from experienced consultants regarding their own business’s reporting projects. The FTS Data & AI team were able to lend a hand and provide tailored advice to a vast number of businesses at various stages of their Power BI reporting journey.

Presenting to a large audience at Microsoft HQ

When Is It?

DIAD events occur throughout the year. Depending on your location, you can find dates for upcoming events by contacting us.

Attendees work on real-world practical examples

Where Is It?

DIAD events are held across the country. The Sydney events this year were held at the Microsoft head office in North Ryde. Here at Microsoft HQ, attendees were able to fully engage with the technology and see first-hand what Power BI and other Microsoft products are capable of, and how they could be used in their business.

See the full and latest capabilities of Power BI

How Much Will It Cost?

DIAD events run for 8 hours and are FREE. Make sure you register your interest early to ensure that you reserve a seat, as spots are limited at each event. Lunch and refreshments are also complimentary and provided throughout the day.

Learn how Power BI can be implemented at your organisation

Each attendee receives expert advice

Why Should I Go?

DIAD is an event like no other. Designed by Microsoft and delivered by professionals, DIAD empowers attendees with the skills and best practices needed to develop successful Power BI reporting solutions. Based on the overwhelmingly positive feedback from attendees, these events have been instrumental in getting Power BI quickly adopted in several organisations.

The opportunity to receive tailored advice from professional Power BI consultants means that you can accelerate the implementation of Power BI at your business, and be confident that you have the capability to develop powerful reporting solutions at your organisation well into the future.

See how other organisations have successfully adopted Power BI


If you want to know more about Dashboard In A Day events, please contact us for more information.

SSAS Tabular Optimisation In 5 Easy Steps



A well-designed SSAS tabular model is often the key ingredient in a successful analytics solution. It aids the business in delivering the type of ad-hoc analysis and on-the-fly insights that drive performance improvement and economic benefit. However, not all models are well-designed. Indeed, a poorly performing cube can quickly become a burden for any organisation, negatively impacting the quality of analysis and becoming a drain on valuable business resources. That's why SSAS Tabular optimisation is so crucial for businesses wanting to get the most value out of their analytics solution.

Recently, I consulted for a large electrical merchandising business that was having trouble with its SSAS tabular models. With national operations, it was imperative that their cubes could rapidly and reliably deliver the analysis needed for the business to confidently make strategic decisions around sales, purchasing and inventory. Memory issues and ambiguous design principles were proving a challenge in getting the tabular model to behave, and it was clear that I needed to tune the existing cubes with some simple optimisation techniques.

When attempting SSAS Tabular optimisation, I employ a straightforward 5-step strategy:

  1. Examine the model and remove unnecessary tables
  2. Examine the tables, remove unnecessary columns and edit table structure/content
  3. Examine the columns and change data types
  4. Examine the DAX measures and edit expressions
  5. Examine server properties and edit memory settings

This 5-step performance tuning approach guarantees that tabular model issues can be precisely identified and appropriately addressed.

1. Examine the Model

A concise tabular model is one that performs best. Therefore, the first step is to review the model itself. Very often a poor-performing cube contains unnecessary tables or relationships that provide no real value. A thorough review of what tables are present in the model and what value they bring will uncover what is necessary and what is redundant. Talking to stakeholders about what they need will also help determine what tables should go and what needs to stay. In my example, I was able to reduce the cube size by removing unnecessary dimension tables that I discovered the business was no longer interested in. This redesign process typically yields 'quick-and-easy' wins in terms of cube performance, as it is the easiest step to implement.

Figure 1. Removing unnecessary tables reduces SSAS tabular model complexity


2. Examine the Tables

What data actually goes into the tables will ultimately determine the quality of the tabular model. Similar to the first step, a review of the tables will often uncover unnecessary columns that do not need to be loaded into the model; for example, columns that are never filtered on or that contain mostly null values. Table structure is also important to tabular model performance, as it can affect how much data needs to be loaded. For example, you could reduce the row count of the sales fact table by aggregating it to the invoice level, instead of the invoice line level. Such a reduction in size means that less memory is required by the cube.
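As a simplified illustration of that aggregation step, the pandas sketch below collapses invoice lines to one row per invoice; the table and column names are illustrative only, standing in for whatever your ETL layer feeds the model.

```python
# An illustrative sketch of aggregating a sales fact table from invoice
# line level up to invoice level. Column names are hypothetical.
import pandas as pd

sales_lines = pd.DataFrame({
    "InvoiceId":  ["INV-001", "INV-001", "INV-001", "INV-002"],
    "LineNumber": [1, 2, 3, 1],
    "Quantity":   [5, 2, 1, 10],
    "Amount":     [50.0, 30.0, 15.0, 120.0],
})

# Collapsing lines to one row per invoice cuts the fact table's row
# count, and with it the memory the cube needs.
sales_invoices = sales_lines.groupby("InvoiceId", as_index=False).agg(
    Quantity=("Quantity", "sum"),
    Amount=("Amount", "sum"),
)
print(sales_invoices)
```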

Figure 2. Tidy up tables by removing columns, and reducing rows


3. Examine the Columns

A crucial aspect of cube performance is compression. Columns with certain data types, or with unique values in every row, compress badly and require more memory. An effective optimisation technique is to correct the data type or values in a column so that it compresses better. Casting values as integers instead of strings, or defining decimal precision, are fundamental practices that are often overlooked in tabular model design, and ultimately come at the expense of performance. In my example, I was able to create a new unique invoice ID that could be used by the business and compressed as an integer. Previously, the varchar invoice key was unique at almost every row of the sales table and was compressing very poorly. The storage engine (VertiPaq) wants to compress columns, and having similar values in the same column greatly aids this. A great tool for this kind of analysis is the VertiPaq Analyzer, which can highlight potential areas of interest and help track the results of your optimisation work.
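Here is a hedged sketch of that surrogate key technique, again with purely illustrative data: pd.factorize assigns a compact integer code to each distinct invoice key, which VertiPaq can compress far more effectively than near-unique strings.

```python
# A sketch of replacing a high-cardinality varchar key with an integer
# surrogate key. The data is illustrative only.
import pandas as pd

sales = pd.DataFrame({
    "InvoiceKey": ["AB-2019-000017", "AB-2019-000018", "AB-2019-000017"],
    "Amount":     [95.0, 120.0, 40.0],
})

# factorize returns (integer codes, index of unique keys); the integer
# column compresses far better than the original strings.
sales["InvoiceId"], invoice_index = pd.factorize(sales["InvoiceKey"])
print(sales[["InvoiceId", "Amount"]])
```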

Figure 3. The VertiPaq Analyzer reveals compression pain points


4. Examine the DAX

For cube users, it is critical that the OLAP queries they run return accurate results rapidly. If a user cannot get the information they need from a model in a reliable or timely manner, the cube is failing to provide the benefits expected of it. Therefore, an important part of tabular model optimisation revolves around the measures, and ensuring that the DAX expressions used are optimised for the formula engine. Keeping measures simple by using basic expressions and removing complicated filter clauses means they should perform better. In my example, I was able to rewrite some of the sales measures at different period intervals (such as month-to-date and year-to-date) so that they could run across different filter contexts, thus reducing calculation time.

Figure 4. Simple DAX equals better performance


5. Examine the Server

Finally, the biggest factor in tabular model processing performance is the actual memory properties. Depending on the edition of Analysis Services, there are various levels of memory limits. For the Standard Edition, the 16 GB limit imposed on a single instance of Analysis Services can often be the 'killer of cubes'. If a reasonable business case exists, then moving to the Enterprise Edition or a cloud-based solution can be the right answer to memory woes. However, there are steps that can be taken to get the best out of an SSAS tabular model without abandoning Standard Edition altogether. Increasing the amount of RAM on the server and modifying the server instance memory properties allows you to fine-tune processing and reduce the likelihood of memory exception errors. In my example, the cube was failing to process as it would run out of memory during a Full Process. I increased the RAM from 32 GB to 40 GB and reduced the Total Memory Limits in the server instance properties. With more memory and lower thresholds at which the memory cleaner processes were initiated, the cube was able to process in full each time without error.
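As a rough guide to how those thresholds interact, the arithmetic below works through the numbers. The 65 and 80 used are the documented defaults for the LowMemoryLimit and TotalMemoryLimit server properties, which are interpreted as percentages of physical RAM when set below 100; your instance may be configured differently.

```python
# Back-of-the-envelope SSAS memory thresholds. Property values below 100
# are read as percentages of physical RAM; 65 and 80 are the documented
# defaults for LowMemoryLimit and TotalMemoryLimit respectively.
ram_gb = 40              # server RAM after the upgrade described above

low_memory_limit = 65    # %: the memory cleaner starts reclaiming here
total_memory_limit = 80  # %: the cleaner works aggressively above this

print(f"Cleaner starts at:     {ram_gb * low_memory_limit / 100:.0f} GB")
print(f"Cleaner aggressive at: {ram_gb * total_memory_limit / 100:.0f} GB")
```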

Figure 5. Fine tune the memory limits to find the optimal level of performance


Summary

Like any business asset, an SSAS tabular model loses value when it is not properly configured or utilised. However, with the proper methodology, any model can be transformed from an underperforming asset into a valuable resource for the business.


If you’re having trouble with SSAS tabular optimisation, we want to hear about it! Please contact us to find out about how we can help you optimise your cubes.