If you’ve ever worked with large volumes of business data and wished your reports would run faster — you’re not alone.
In many enterprises, reports start off as SQL queries over transactional databases. These work fine at small scale. But as data grows, things slow down. And that’s where SSAS cubes come into play.
Table of contents
- 📊 What is an SSAS Cube and Why Is It Used?
- 🧹 Moving Beyond SSAS Cubes: Two Modern Alternatives
- 🧠 Rebuilding the Cube in Power BI / Microsoft Fabric
- 🛠 Step 1: Open Power BI and Connect to the Lakehouse
- 🔗 Step 2: Define Relationships Between Tables
- 🧮 Step 3: Rewrite Calculations in DAX
- 🧹 Step 4: Clean Up the Model (Hide Unwanted Elements)
- 📤 Step 5: Publish Your Model
- 📊 Using the Semantic Model in Excel — Just Like a Cube
- 🧠 Final Thoughts
📊 What is an SSAS Cube and Why Is It Used?
SSAS stands for SQL Server Analysis Services — a Microsoft platform for building analytical data models.
It’s part of the Microsoft Business Intelligence stack, alongside tools like SQL Server, SSIS, SSRS, and now Power BI.
🧊 What is a Cube?
At its core, an SSAS cube is a special data structure designed for fast, multidimensional analysis.
Imagine being able to:
- View sales by product category, region, and year
- Slice the data by customer type or marketing channel
- Drill down from yearly trends to daily transactions
And do all of that instantly, without hammering your database. That’s the power of a cube.
🔍 Why Use a Cube?
Here’s why businesses use SSAS cubes:
| Benefit | Explanation |
|---|---|
| 🚀 Performance | Cubes pre-aggregate data — making reports blazing fast |
| 🔁 Slicing & Dicing | You can easily pivot and filter data by dimensions like time, region, or product |
| 📐 Reusable Model | One central model can be used across Excel, Power BI, and other tools |
| 🧩 Calculated KPIs | You can define advanced metrics using MDX or DAX |
| 🔐 Security | Role-based access to control what each user can see |
🗂️ What’s Inside a Cube?
A typical SSAS cube includes:
- **Dimensions** – e.g., Time, Product, Customer
- **Measures** – e.g., Sales Amount, Quantity Sold
- **Hierarchies** – e.g., Year → Quarter → Month
- **KPIs** – e.g., Profit Margin, YoY Growth
All these elements come together to enable interactive, multi-angle analysis.
🧹 Moving Beyond SSAS Cubes: Two Modern Alternatives
While SSAS cubes have served us well for years, modern data platforms offer more flexible, scalable, and cloud-friendly options. If you’re thinking of phasing out your existing SSAS cube infrastructure, you’re not alone.
> How do I move away from SSAS without losing the performance and modeling benefits?

I’ve explored two practical paths; which one fits depends on how you want to treat your data.
Let’s dive into them.
☁️ Option 1: Keep the Data On-Prem, Move the Model to the Cloud
This is ideal when:
- Your data size is **moderate**
- The datasets are **already cleaned and structured**
- You want to modernize **just the reporting layer** without a full data migration

You can create a semantic model in the cloud (via Power BI or Fabric) and keep the actual data on-premises. This approach gives you a modern, self-service BI layer — without changing where your data resides.

🧠 **Example use case**: You have a few fact tables, each under a few million rows, and users mostly consume summary-level dashboards. Keeping the data on-prem while upgrading the semantic layer is perfectly viable here.
🪵 Option 2: Move the Data to Fabric Lakehouse and Build Natively
This is the better (and often the only) choice when:
- Your data is **massive** — think tens or hundreds of billions of rows
- **Performance is critical**
- You want to scale out for **real-time or near-real-time reporting**
- Your organization is already transitioning to a cloud-first architecture

In such cases, pushing data through a gateway to the cloud isn’t practical. Instead, you move the underlying data to a Fabric Lakehouse and build your semantic model natively on top of it using Direct Lake or Import mode — both designed for large-scale, modern workloads.

🚀 **Example use case**: You’re dealing with telemetry logs, event streams, or sales fact tables with 100+ billion rows. You need cloud-native compute and storage to handle this scale efficiently.
🧠 Rebuilding the Cube in Power BI / Microsoft Fabric
Once you’ve decided to move away from SSAS cubes, the next step is to rebuild your analytical model using modern tools — specifically, Power BI and Microsoft Fabric.
This is where your new cube comes to life — but with better performance, flexibility, and compatibility.
Let’s walk through the process in detail.
🛠 Step 1: Open Power BI and Connect to the Lakehouse
Before you start building your new semantic model, the first step is to connect Power BI to your data. But how you connect, and which options are available, depends entirely on where your data currently resides.
🖥️ A. If Your Data Is Still On-Premises
If your data hasn’t yet moved to the cloud and still lives in **SQL Server or other on-prem databases**, Power BI gives you two main connection options:

✅ Import

This is the most common and recommended option. Power BI pulls the data into its in-memory engine (VertiPaq), making reports fast and responsive.

- Best for **smaller to medium datasets**
- Allows full modeling features
- Refreshes are scheduled through the **on-premises data gateway**

🔁 DirectQuery

In this mode, Power BI **leaves the data on-prem** and runs queries in real time against your source system.

- No data is stored in Power BI
- Best for **real-time or near-real-time dashboards**
- ⚠️ Be careful with performance — query latency, SQL Server load, and data modeling limitations can impact user experience

💡 If you’re staying on-prem, **Import mode** is typically the safest and most performant choice for most BI workloads.
☁️ B. If Your Data Is in Fabric Lakehouse
If you’ve already moved your data to **Microsoft Fabric’s Lakehouse**, you now have three connectivity options in Power BI:

✅ Import

Same as on-prem, this loads the data into Power BI’s in-memory engine. Offers great performance and full modeling capabilities.

🔁 DirectQuery

Leaves data in place and queries the Lakehouse tables directly on every interaction. Suitable for **large, constantly updated datasets**, but comes with limitations in performance and certain DAX features.

⚡ Direct Lake

This is Fabric’s newest and most powerful option — combining the performance of Import with the freshness of DirectQuery.

- Data remains in the Lakehouse
- Power BI reads Parquet files directly via OneLake
- No need for scheduled refresh
- Near real-time analytics at Import-level speed

✅ Direct Lake is the **ideal choice** — but only if your dataset fits within the size limits.
⚠️ Direct Lake Limitations by Capacity
| Fabric Capacity | Max Direct Lake Model Size |
|---|---|
| P1 / F64 | Up to 25 GB |
| P2 / F128 | Up to 50 GB |
| P3 / F256 | Up to 100 GB |

🧠 If your model exceeds this size, Direct Lake won’t work. You’ll need to fall back to **Import or DirectQuery**, depending on your performance and refresh needs.
🔗 Step 2: Define Relationships Between Tables
Once your tables are loaded or connected:

- Go to the **Model View** in Power BI.
- Start creating **relationships** between your fact and dimension tables.

Power BI supports only **one active relationship** between two tables. If your original SSAS cube had multiple relationships between the same two tables, you need to:

🔁 **Create duplicate dimension tables.**

This is a common design workaround: for example, if your `Sales` table has two relationships to the `Date` table (Order Date and Ship Date), you’ll duplicate the `Date` table:

- `Date_Order`
- `Date_Ship`

This ensures each has a **single, clear path** to the fact table — avoiding ambiguous model behavior.
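Alternatively, you can keep a single `Date` table: leave one relationship active, mark the second as inactive, and activate it per measure with DAX’s `USERELATIONSHIP`. A minimal sketch, assuming hypothetical column names `Sales[OrderDateKey]` and `Sales[ShipDateKey]`, both related to `'Date'[DateKey]`:

```dax
-- Uses the active (Order Date) relationship by default
Total Sales := SUM ( Sales[Amount] )

-- Activates the inactive Ship Date relationship for this measure only
Sales by Ship Date :=
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( Sales[ShipDateKey], 'Date'[DateKey] )
)
```

Duplicated date tables are friendlier for self-service users browsing the field list; `USERELATIONSHIP` keeps the model leaner. Both are valid, so pick based on who will consume the model.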
🧮 Step 3: Rewrite Calculations in DAX
Your SSAS cube might include:

- MDX-based KPIs
- Calculated measures like `Total Sales`, `Profit Margin`, or `YoY Growth`
- Derived columns in dimensions

These all need to be rewritten in **DAX (Data Analysis Expressions)** — Power BI’s native formula language.

You’ll replace things like:

```
SUM([Measures].[Sales])
```

With DAX equivalents like:

```dax
Total Sales := SUM(Sales[Amount])
```
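To give a feel for the conversion, here’s how a few typical cube measures might look in DAX. This is a sketch under assumed names (a `Sales` table with `Amount` and `Cost` columns, and a date table `'Date'` marked as a date table), so adjust to your own schema:

```dax
Total Sales := SUM ( Sales[Amount] )

Total Cost := SUM ( Sales[Cost] )

-- DIVIDE handles division by zero gracefully
Profit Margin := DIVIDE ( [Total Sales] - [Total Cost], [Total Sales] )

-- Time intelligence requires a proper date table marked as such
Sales LY := CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )

YoY Growth := DIVIDE ( [Total Sales] - [Sales LY], [Sales LY] )
```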
🧠 DAX is powerful, but different from MDX. Expect a learning curve if you’re coming from a cube-heavy world.
Make sure to validate your logic and test the numbers thoroughly after conversion.
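One practical way to validate is to run DAX queries against the new model (in Power BI’s DAX query view or DAX Studio) and compare the results with the cube’s numbers. A small example, reusing the hypothetical measures above:

```dax
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],
    "Total Sales", [Total Sales],
    "YoY Growth", [YoY Growth]
)
ORDER BY 'Date'[Year]
```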
🧹 Step 4: Clean Up the Model (Hide Unwanted Elements)
Once the relationships and measures are in place, it’s time to clean up the model for end-users:
- Hide **primary key and foreign key columns**
- Hide **technical columns** not meant for reporting
- Optionally hide **entire tables** if they’re only used for lookups or calculations
This makes the report-building experience cleaner and avoids user confusion.
✅ A clean semantic model = a better user experience
📤 Step 5: Publish Your Model
Once your new semantic model is ready:

1. Save and publish the `.pbix` file to a **Fabric workspace** (or Power BI Service).
2. Assign it to a **Premium or Fabric capacity** to enable features like:

- Direct Lake
- Scheduled refresh
- Row-level security
- Gateway connections

You now have a **modern alternative to SSAS cubes**, powered by Fabric, with the ability to scale and evolve with your needs.
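Row-level security deserves a quick illustration, since it replaces SSAS role-based security. In Power BI Desktop you define a role whose table filter is a DAX expression. A hypothetical sketch, where the `Region` table and its `SalesRepEmail` column are placeholder names:

```dax
-- Table filter on the Region table for an RLS role:
-- each signed-in user sees only their own region's rows
'Region'[SalesRepEmail] = USERPRINCIPALNAME ()
```

After publishing, you assign users or groups to the role in the Service, and the filter propagates through the model’s relationships.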
📊 Using the Semantic Model in Excel — Just Like a Cube
One of the best parts of moving to Fabric is that you don’t lose the familiar **Excel experience**. In fact, working with a Fabric-based semantic model in Excel feels almost identical to connecting to an SSAS cube.

Here’s how it works:
✅ Step-by-Step: Analyze in Excel from Fabric
1. Once you’ve published your semantic model to a **Fabric workspace**, open the workspace in the Power BI / Fabric portal.
2. Locate your **semantic model** in the workspace.
3. Click the model to open its details page.
4. At the top of the page, click **“More options” (···)** or the **Export** dropdown, then select **“Analyze in Excel”**.
5. Excel will automatically open (or download a `.odc` connection file). You’ll be connected directly to the semantic model — no need to manually configure data sources or connections.
🧠 Just Like Using a Cube
From here, you’ll be able to:
- Drag and drop fields into a **Pivot Table**
- Slice by dimensions like Date, Region, Product
- Use calculated measures, KPIs, and hierarchies defined in your model
🧠 Final Thoughts
Rebuilding a cube might seem like a big task — but with the right modeling, DAX, and governance practices, you’re setting yourself up for a more agile, scalable, and cloud-native analytics platform.
In the next post, I’ll show you how to automate creating partitions and refreshing them when dealing with huge amounts of data in Import mode.