HANA News Blog

Data Tiering case study part II: Inverted Individual Indexes

Jens Gleichmann • 6. April 2026

Inv Idv Idx - Inverted Individual Indexes - warm data


An inverted individual index is a special type of SAP HANA index which can only be used for unique indexes. This includes all primary key structures. Unlike traditional column store indexes on more than one column (inverted value, inverted hash), an inverted individual index (Inv Idv Idx) doesn't need a dedicated concat attribute.
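For illustration, a minimal sketch of how such an index can be defined at table creation time (the table and column names are hypothetical; the exact syntax supported by your revision is documented in SAP Note 2600076):

```sql
-- Hypothetical table: the primary key is defined as an inverted
-- individual index, so no concat attribute is built for it.
CREATE COLUMN TABLE sales_order (
    order_id INTEGER,
    item_no  INTEGER,
    amount   DECIMAL(15,2),
    PRIMARY KEY INVERTED INDIVIDUAL (order_id, item_no)
);
```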

Why are Inv Idv Idx relevant for data tiering purposes?

  • Significantly reduced memory footprint due to absence of concat attribute
  • Less I/O in terms of data and redo logs
  • Reduced table optimization efforts
  • DML change operations can be faster due to reduced concat attribute maintenance

 

What influence can such indexes have on the sizing?

40-50% of an S/4HANA system can consist of indexes. Most of these indexes are primary keys, but there are also some secondary indexes, which in most cases are not unique. This means indexes have an overall impact of 30-40% on the sizing, depending on the size of the unique indexes; this can vary for each system.

 

Why is it not sensible to convert every unique index?

It is possible to convert every unique index, but as always it is a trade-off between savings and performance. It depends on the filter mix of the critical SQL workload: if the filters are part of the index and one of the filtered columns is also very selective, the conversion may make no difference in terms of performance.
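To get a first impression of the possible savings before converting anything, the memory consumed by the primary key concat attributes can be summed up. The following is a sketch against the M_CS_ALL_COLUMNS monitoring view; it only covers the primary key concat attribute ($trexexternalkey$), not concat attributes of secondary unique indexes:

```sql
-- Memory of the primary key concat attribute per table: this is
-- roughly the memory an inverted individual primary key would
-- no longer need.
SELECT schema_name,
       table_name,
       ROUND(SUM(memory_size_in_total) / 1024 / 1024, 2) AS concat_mb
  FROM m_cs_all_columns
 WHERE column_name = '$trexexternalkey$'
 GROUP BY schema_name, table_name
 ORDER BY concat_mb DESC;
```

The largest entries in this list are the natural first candidates for a conversion test.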

 

How much memory can be saved with Inv Idv Idx?

It depends on the system, the workload and the distinct values of the columns of the unique indexes. On average, however, we observed 11% savings in the payload.

The smallest saving was 5% and the peak saving 15% of the column store (CS) payload.

In total we could save roughly 1.2 TB of memory out of 8.9 TB, which is about 13.5%.

On which systems can Inverted Individual Indexes be used?

They are supported on any HANA system. The definition is possible in SE11. For BW/4HANA systems they are available in the BW designer under the name “Memory-optimized Primary Key Structure”.
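If you prefer SQL over SE11, converting the primary key of an existing table could look like the following sketch (the table name is hypothetical; please verify the exact syntax for your revision in SAP Note 2600076 before using it):

```sql
-- Convert an existing primary key to the inverted individual
-- layout; the concat attribute ($trexexternalkey$) is dropped
-- in the process.
ALTER TABLE sales_order ALTER PRIMARY KEY INVERTED INDIVIDUAL;
```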

 

The good thing about this part of data tiering is that it can be analyzed and implemented quite quickly. But, as with every implementation, it must be tested well.

 

If you are interested in more details, have a look at the general SAP Note 2600076 or our book “HANA deep dive”, which will be released in a few weeks (it is already available in German).

 

Summary

Both data tiering methods, NSE and Inv Idv Idx, can cut your sizing by more than half: in our customer case we observed payload savings of 42% and 69%, depending on the performance requirements. In the current times the needed resources can be quite expensive, and these two aspects of data tiering carry the most weight. Comparing the costs of a data tiering project with the savings, the ROI calculation is a quick one. The point in time of such a project can also be quite important, for instance before the procurement of new hardware, a migration to the cloud, or the start of a RISE journey. Such a project can save immense costs if it cuts the sizing before such contracts are signed.
