SSAS Cube Migration into a Fabric Semantic Model

If you’ve ever worked with large volumes of business data and wished your reports would run faster — you’re not alone.

In many enterprises, reports start off as SQL queries over transactional databases. These work fine at small scale. But as data grows, things slow down. And that’s where SSAS cubes come into play.

📊 What is an SSAS Cube and Why Is It Used?

SSAS stands for SQL Server Analysis Services — a Microsoft platform for building analytical data models.

It’s part of the Microsoft Business Intelligence stack, alongside tools like SQL Server, SSIS, SSRS, and now Power BI.

🧊 What is a Cube?

At its core, an SSAS cube is a special data structure designed for fast, multidimensional analysis.

Imagine being able to:

- Slice sales by time, region, or product with a click
- Drill down from yearly totals to quarterly and monthly detail
- Pivot the same numbers across any combination of dimensions

And do all of that instantly, without hammering your database. That's the power of a cube.

🔍 Why Use a Cube?

Here’s why businesses use SSAS cubes:

| Benefit | Explanation |
| --- | --- |
| 🚀 Performance | Cubes pre-aggregate data, making reports blazing fast |
| 🔁 Slicing & Dicing | Easily pivot and filter data by dimensions like time, region, or product |
| 📐 Reusable Model | One central model can be used across Excel, Power BI, and other tools |
| 🧩 Calculated KPIs | Define advanced metrics using MDX or DAX |
| 🔐 Security | Role-based access controls what each user can see |
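To make the "Calculated KPIs" row concrete, here is what a simple KPI-style measure looks like in DAX; in a cube, the same logic would live in an MDX calculated member. The table and column names below are illustrative:

```dax
-- A margin KPI defined once in the model and reused by every report.
Profit Margin % :=
DIVIDE ( SUM ( Sales[Profit] ), SUM ( Sales[Amount] ) )
```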

🗂️ What’s Inside a Cube?

A typical SSAS cube includes:

- Measures: numeric facts such as sales amount or order quantity
- Dimensions: descriptive context such as time, region, or product
- Hierarchies: drill-down paths like Year → Quarter → Month
- Aggregations: pre-computed summaries that keep queries fast
- Calculated members and KPIs: business logic defined in MDX

All these elements come together to enable interactive, multi-angle analysis.


🧹 Moving Beyond SSAS Cubes: Two Modern Alternatives

While SSAS cubes have served us well for years, modern data platforms offer more flexible, scalable, and cloud-friendly options. If you’re thinking of phasing out your existing SSAS cube infrastructure, you’re not alone.

The big question: how do I move away from SSAS without losing the performance and modeling benefits?

I've explored two practical paths; which one fits depends on how you want to treat your data.

Let’s dive into them.

☁️ Option 1: Keep the Data On-Prem, Move the Model to the Cloud

This is ideal when:

- Your data must stay on-premises for compliance or policy reasons
- Your fact tables are modest in size (up to a few million rows)
- Users mostly consume summary-level dashboards

You can create a semantic model in the cloud (via Power BI or Fabric) and keep the actual data on-premises. This approach gives you a modern, self-service BI layer — without changing where your data resides.

🧠 Example use case: You have a few fact tables, each under a few million rows, and users mostly consume summary-level dashboards. Keeping the data on-prem while upgrading the semantic layer is perfectly viable here.

🪵 Option 2: Move the Data to Fabric Lakehouse and Build Natively

This is the better (and often the only) choice when:

- Your data volumes are massive (hundreds of millions or billions of rows)
- Refreshing through an on-premises gateway isn't practical
- You need cloud-native compute and storage to handle the scale

In such cases, you move the underlying data to a Fabric Lakehouse and build your semantic model natively on top of it using Direct Lake or Import mode, both designed for large-scale, modern workloads.

🚀 Example use case: You’re dealing with telemetry logs, event streams, or sales fact tables with 100+ billion rows. You need cloud-native compute and storage to handle this scale efficiently.


🧠 Rebuilding the Cube in Power BI / Microsoft Fabric

Once you’ve decided to move away from SSAS cubes, the next step is to rebuild your analytical model using modern tools — specifically, Power BI and Microsoft Fabric.

This is where your new cube comes to life — but with better performance, flexibility, and compatibility.

Let’s walk through the process in detail.


🛠 Step 1: Open Power BI and Connect to the Lakehouse

Before you start building your new semantic model, the first step is to connect Power BI to your data. But how you connect and which options are available depends entirely on where your data currently resides.

🖥️ A. If Your Data Is Still On-Premises

If your data hasn't yet moved to the cloud and still lives in SQL Server or other on-prem databases, Power BI gives you two main connection options:

✅ Import

This is the most common and recommended option. Power BI pulls the data into its in-memory engine (VertiPaq), making reports fast and responsive.

🔁 DirectQuery

In this mode, Power BI leaves the data on-prem and runs queries in real time against your source system.

💡 If you’re staying on-prem, Import mode is typically the safest and most performant choice for most BI workloads.

☁️ B. If Your Data Is in Fabric Lakehouse

If you’ve already moved your data to Microsoft Fabric’s Lakehouse, you now have three connectivity options in Power BI:

✅ Import

Same as on-prem, this loads the data into Power BI’s in-memory engine. Offers great performance and full modeling capabilities.

🔁 DirectQuery

Leaves data in-place and queries the Lakehouse tables directly on every interaction. Suitable for large, constantly updated datasets, but comes with limitations in performance and certain DAX features.

⚡ Direct Lake

This is Fabric’s newest and most powerful option — combining the performance of Import with the freshness of DirectQuery.

✅ Direct Lake is the ideal choice — but only if your dataset fits within the size limits.

⚠️ Direct Lake Limitations by Capacity

| Fabric Capacity | Max Direct Lake Model Size |
| --- | --- |
| P1 / F64 | Up to 25 GB |
| P2 / F128 | Up to 50 GB |
| P3 / F256 | Up to 100 GB |

🧠 If your model exceeds this size, Direct Lake won’t work. You’ll need to fall back to Import or DirectQuery, depending on your performance and refresh needs.


🔗 Step 2: Define Relationships Between Tables

Once your tables are loaded or connected:

- Switch to Model view in Power BI Desktop
- Create relationships between your fact and dimension tables (drag key column to key column)
- Set the cardinality and cross-filter direction for each relationship

Power BI supports only one active relationship between two tables. If your original SSAS cube had multiple relationships between the same two tables (role-playing dimensions), you need to:

🔁 Create duplicate dimension tables.

This is a common design workaround. For example, if your Sales table has two relationships to the Date table (Order Date and Ship Date), you'll duplicate the Date table:

- Date_Order
- Date_Ship

This gives each copy a single, clear path to the fact table, avoiding ambiguous model behavior.
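If you prefer not to duplicate tables, DAX also lets you keep a single Date table, mark the second relationship inactive, and activate it per measure with USERELATIONSHIP. A minimal sketch of both approaches, assuming a Sales fact with OrderDateKey and ShipDateKey columns (all names here are illustrative):

```dax
-- Approach 1: role-playing copies of the Date dimension,
-- each defined as a calculated table mirroring the original.
Date_Order = 'Date'
Date_Ship  = 'Date'

-- Approach 2: one Date table, with the Ship Date relationship
-- left inactive and switched on only inside specific measures.
Sales by Ship Date :=
CALCULATE (
    SUM ( Sales[Amount] ),
    USERELATIONSHIP ( Sales[ShipDateKey], 'Date'[DateKey] )
)
```

Duplicated tables are easier for self-service users to discover; inactive relationships keep the model smaller. Pick one convention and apply it consistently.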


🧮 Step 3: Rewrite Calculations in DAX

Your SSAS cube might include:

- Calculated members written in MDX
- KPIs and scorecards
- Named sets

These all need to be rewritten in DAX (Data Analysis Expressions), Power BI's native formula language.

You'll replace MDX expressions like:

```mdx
SUM([Measures].[Sales])
```

With DAX equivalents like:

```dax
Total Sales := SUM(Sales[Amount])
```

🧠 DAX is powerful, but different from MDX. Expect a learning curve if you’re coming from a cube-heavy world.
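To see the gap in practice, here is how a typical MDX time-intelligence calculation translates into DAX, a hedged sketch assuming a table named 'Date' that is marked as a date table (all names are illustrative):

```dax
-- MDX: Sum(YTD([Date].[Calendar].CurrentMember), [Measures].[Sales])
-- DAX equivalent using built-in time intelligence:
Sales YTD :=
TOTALYTD ( SUM ( Sales[Amount] ), 'Date'[Date] )
```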

Make sure to validate your logic and test the numbers thoroughly after conversion.
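One practical way to do that reconciliation is a handful of throwaway sanity-check measures whose grand totals you compare side by side with the old cube's numbers (names are illustrative):

```dax
-- Compare these grand totals against the same figures from SSAS.
Sales Row Count := COUNTROWS ( Sales )
Distinct Customers := DISTINCTCOUNT ( Sales[CustomerKey] )
```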


🧹 Step 4: Clean Up the Model (Hide Unwanted Elements)

Once the relationships and measures are in place, it's time to clean up the model for end-users:

- Hide key columns and other technical fields users don't need
- Hide or remove staging and helper tables
- Organize measures into display folders
- Give tables and columns business-friendly names
- Set default formats for measures (currency, percentages, thousands separators)

This makes the report-building experience cleaner and avoids user confusion.

✅ A clean semantic model = a better user experience


📤 Step 5: Publish Your Model

Once your new semantic model is ready:

Save and publish the .pbix file to a Fabric workspace (or Power BI Service).

Assign it to a Premium or Fabric capacity to enable features like:

- The XMLA endpoint, so external tools can connect to and manage the model
- The large semantic model storage format
- More frequent scheduled refreshes
- Direct Lake connectivity (on Fabric capacities)

You now have a modern alternative to SSAS cubes, powered by Fabric, with the ability to scale and evolve with your needs.


📊 Using the Semantic Model in Excel — Just Like a Cube

One of the best parts of moving to Fabric is that you don't lose the familiar Excel experience. In fact, working with a Fabric-based semantic model in Excel feels almost identical to connecting to an SSAS cube.

Here’s how it works:

✅ Step-by-Step: Analyze in Excel from Fabric

1. Open your Fabric workspace (or the Power BI Service) and locate the published semantic model.
2. From the model's options menu, choose Analyze in Excel.
3. Excel opens with a live connection to the model and an empty PivotTable.

🧠 Just Like Using a Cube

From here, you'll be able to:

- Drag measures and dimensions into rows, columns, values, and filters
- Slice, dice, and drill down interactively
- Refresh the PivotTable to pull the latest numbers, just as you did with SSAS


🧠 Final Thoughts

Rebuilding a cube might seem like a big task — but with the right modeling, DAX, and governance practices, you’re setting yourself up for a more agile, scalable, and cloud-native analytics platform.

In the next post, I'll show you how to automate creating and refreshing partitions when dealing with huge amounts of data in Import mode.

