HANA News Blog

Golden Rules for HANA Partitioning

Jens Gleichmann • 26. November 2024
SAP HANA Partitioning process

Partitioning is not only useful to get rid of the record limitation alerts ID 17 ("Record count of non-partitioned column-store tables"), 20 ("Table growth of non-partitioned column-store tables") or 27 ("Record count of column-store table partitions"); it can also improve the performance of SQL statements, startup, HSR, data migration and recovery. A proper design also lets you make use of NSE by paging out range partitions which are not frequently used.
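
As a minimal sketch of that NSE option, assuming a hypothetical table ZSALES_HISTORY whose partition 2 holds old, rarely read data:

    -- Move the rarely used partition to NSE (page loadable): its data is then
    -- loaded page-wise from disk on demand instead of being kept fully in memory.
    ALTER TABLE "SAPABAP1"."ZSALES_HISTORY" ALTER PARTITION 2 PAGE LOADABLE;

    -- Revert the partition to the default fully in-memory representation.
    ALTER TABLE "SAPABAP1"."ZSALES_HISTORY" ALTER PARTITION 2 COLUMN LOADABLE;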

I have also seen quite a few wrong designs: multiple columns used as partition attribute, partitions that are too big or too small, designs with too many empty partitions. It can also happen that a design which is currently completely correct does not scale and performs badly under a massive change rate (increase due to growth as well as decrease due to archiving). This means you have to monitor the SQLs and the change rate and review your partition design again after 8-12 months. New features (dynamic range partitioning / dynamic aging: threshold, interval + distance) can also make a redesign worthwhile. It is a moving target and like building a house - it will never be 100% complete and fitting your requirements, because the requirements will change over time.

Overall it is a very complex topic which should only be handled by experts. A repartitioning can be really painful due to long downtimes (offline / standard partitioning) or can impact the business through long runtimes and resource consumption (online partitioning).

Rules of thumb

Rules of thumb for the initial partitioning:

  • min. 100 million entries per partition for the initial partitioning
  • max. 500 million entries per partition for the initial partitioning
  • if you choose too many partitions, performance can suffer because one thread per partition has to be triggered (e.g. with a statement concurrency limit of 20 threads, scanning 40 partitions results in waiting for resources)
  • if you choose too few partitions, you may have to repartition again fairly soon, which means another maintenance window / downtime
  • recommendation: HASH partitioning on a selective column, typically part of the primary key
  • make sure that a single partition does not exceed a size of about 25 to 30 GB, due to delta merge performance (SAP Note 2100010)
  • avoid too many empty partitions


Some of these rules can be checked by using the mini checks of SAP Note 1969700; a manual check is sketched below.
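
For a quick manual check against the monitoring view M_CS_TABLES, a sketch along these lines can be used (thresholds taken from the rules above):

    -- Partitions violating the rules of thumb: more than 500 million records
    -- or more than ~30 GB in memory (MEMORY_SIZE_IN_TOTAL is in bytes).
    SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT,
           ROUND(MEMORY_SIZE_IN_TOTAL / 1024 / 1024 / 1024, 2) AS SIZE_GB
      FROM M_CS_TABLES
     WHERE RECORD_COUNT > 500000000
        OR MEMORY_SIZE_IN_TOTAL > 30 * 1024 * 1024 * 1024
     ORDER BY RECORD_COUNT DESC;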

For every rule there is an exception. For example, ACDOCA can have a partition size of up to 100 GB (SAP Note 2044468) due to its number of fields, which changes over time depending on the S/4HANA release. You pay a penalty during delta merges, but if pruning is not possible it does make sense not to split such a partition. This depends on the filters and the parallel threads they require, as well as the number of parallel executions and the business criticality.

It always depends on the usage of your system. There is no general design which is valid for every customer. You always need a detailed analysis, and it has to be repeated regularly because the workload and also the SQL (incl. filters) can change over time.


HASH 

Use HASH partitioning if you do not have a number, ID or interval which is constantly increasing and used in the SQLs' WHERE clauses. If you do not know the change rate and distribution, you can use HASH partitioning as a safe harbor. But choose the number of partitions wisely! You cannot easily add partitions as with RANGE partitioning, so the scalability is limited and a repartitioning can be expensive! Create neither too many nor too few partitions.
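
A minimal sketch with a hypothetical table and column (8 partitions on a selective primary-key column):

    -- Create 8 HASH partitions; rows are distributed by the hash of BELNR.
    ALTER TABLE "SAPABAP1"."ZDOCUMENTS" PARTITION BY HASH ("BELNR") PARTITIONS 8;

Changing the number of HASH partitions later means redistributing the whole table, which is exactly why the initial partition count should anticipate growth.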


RANGE

Use RANGE partitioning if you have a time attribute or a change number, i.e. an integer-like value which is constantly increasing and used in the SQLs' WHERE clauses.

Typical date/time fields (the data type in the ABAP Dictionary differs from the one on the HANA DB):

  • GJAHR (4 chars, ABAP DDIC: NUMC, HANA: NVARCHAR)
  • UDATE (8 chars, ABAP DDIC: DATS, HANA: NVARCHAR)
  • TIMESTAMP (21 chars, ABAP DDIC: DEC, HANA: DECIMAL)


Typical other integer fields which can be used for RANGE partitioning (there are certainly more):

  • KNUMV
  • CHANGENR


This does not mean that every table with such columns should be partitioned by these attributes. It depends on the distribution and selectivity. Every system is different and there is no silver bullet.
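
The distribution of a candidate column can be checked quickly; a sketch with a hypothetical table:

    -- Value distribution of a candidate partitioning column: heavily skewed
    -- counts indicate that RANGE partitions would be unevenly filled.
    SELECT "GJAHR", COUNT(*) AS CNT
      FROM "SAPABAP1"."ZFI_DOCS"
     GROUP BY "GJAHR"
     ORDER BY CNT DESC;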

The advantage of RANGE partitioning is that you can add new partitions within milliseconds without disturbing the business operations, which makes RANGE the smarter partitioning option. You can also rebalance the design by merging or splitting partitions. During that repartitioning only the defined and affected ranges will be touched, which allows you to redesign the partitioning without big downtimes. This applies to standard/offline partitioning; for online partitioning, the complete table always has to be touched!

In all normal cases RANGE partitioning also includes an OTHERS partition, in which all data is stored that does not fall into a valid range. There are some features regarding dynamic range options which will be handled in an article of their own.
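
A minimal sketch with a hypothetical table, using GJAHR as range column:

    -- RANGE partitioning by fiscal year, with an OTHERS partition as catch-all.
    ALTER TABLE "SAPABAP1"."ZFI_DOCS" PARTITION BY RANGE ("GJAHR")
      (PARTITION '2020' <= VALUES < '2024',
       PARTITION '2024' <= VALUES < '2025',
       PARTITION OTHERS);

    -- Adding the next year later is fast, as long as the OTHERS partition
    -- does not yet contain rows for that range.
    ALTER TABLE "SAPABAP1"."ZFI_DOCS" ADD PARTITION '2025' <= VALUES < '2026';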

Multilevel partitioning

If your system includes huge tables, it might be wise to use more than one attribute: one on the first level and one on the second level. Whether it makes sense depends on the affected table and columns. There are only rare scenarios where it makes sense to combine multiple attributes on one level.
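
A common pattern (sketched here with hypothetical names) is HASH on the first level for load balancing and RANGE on the second level for time-based pruning and NSE:

    -- First level: HASH over a selective key column for even distribution;
    -- second level: RANGE by year for pruning and time-based housekeeping.
    ALTER TABLE "SAPABAP1"."ZSALES" PARTITION BY
      HASH ("VBELN") PARTITIONS 4,
      RANGE ("GJAHR") (PARTITION '2020' <= VALUES < '2025', PARTITION OTHERS);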


Designing Partitions

Online repartitioning is actually based on table replication. Tables with the naming convention _SYS_OMR_<source_table>#<id> are used as interim tables during online repartitioning operations. For details please read the “Designing Partitions” section in the SAP HANA documentation.
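
Based on that naming convention, interim tables of a running online repartitioning can be spotted with a simple lookup (a sketch; note the LIKE escaping, since underscores are wildcards):

    -- Interim replica tables created by online repartitioning operations.
    SELECT SCHEMA_NAME, TABLE_NAME, RECORD_COUNT
      FROM M_CS_TABLES
     WHERE TABLE_NAME LIKE '\_SYS\_OMR\_%' ESCAPE '\'
     ORDER BY RECORD_COUNT DESC;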


Summary:

  1. Use partitioning columns that are often used in WHERE clauses for partition pruning.
  2. If you don't know which partitioning scheme to use, start with hash partitioning.
  3. Use as many columns in the hash partitioning as required for good load balancing, but try to use only those columns that are typically used in requests.
  4. Queries do not necessarily become faster when smaller partitions are searched. Often queries make use of indexes and the table or partition size is not significant. If the search criterion is not selective though, partition size does matter.
  5. Time-based partitioning often involves hash-range partitioning with the range on a date column.
  6. If you split an index (SAP also refers to column-store tables as indexes), always use a multiple of the source parts (for example from 2 to 4 partitions). This way the split will be executed in parallel mode and also does not require parts to be moved to a single server first.
  7. Do not split/merge a table unless necessary.
  8. Ideally tables have a time criterion in the primary key. This can then be used for time-based partitioning.
  9. Single-level partitioning limitation: only key columns can be used as partitioning columns (homogeneous partitioning).
  10. The client (MANDT/MANDANT) as single attribute for partitioning is not recommended - it is only useful in multi-level partitioning scenarios in real multi-client environments.
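
To verify that pruning (point 1) actually works for a given statement, the explain plan can be inspected; a sketch with a hypothetical query:

    -- Generate the plan for a query filtering on the partitioning column ...
    EXPLAIN PLAN SET STATEMENT_NAME = 'prune_check' FOR
      SELECT * FROM "SAPABAP1"."ZFI_DOCS" WHERE "GJAHR" = '2024';

    -- ... then check in the operator details which partitions are accessed.
    SELECT OPERATOR_NAME, TABLE_NAME, OPERATOR_DETAILS
      FROM EXPLAIN_PLAN_TABLE
     WHERE STATEMENT_NAME = 'prune_check';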


In the end, if you want to be on the safe side, just contact us. We will find a scalable design, may improve performance, or also find an NSE design for your tables. It is a complex topic, and for a proper design you need deep knowledge of SQL performance tuning, of the partitioning options and of how HANA works in general. In the near future our new book will be released with all details regarding partitioning.

