HANA News Blog

HANA systems: Linux swappiness

Jens Gleichmann • 24 March 2024

Page Priority


Since SLES15 SP4 and RHEL 8 (and other Linux distributions as well), the reclaim behavior has changed (details):

Before: vm.swappiness Default 60 [Range: 0 - 100]

After: vm.swappiness Default 60 [Range: 0 - 200]
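The currently active value is easy to inspect at runtime; a quick check (these paths and commands are standard on any modern distribution, while the supported range depends on the kernel version, not on the file):

```shell
# Read the current swappiness value directly from procfs.
# An untuned system typically shows the distribution default of 60.
cat /proc/sys/vm/swappiness

# Equivalent query via the sysctl interface:
sysctl vm.swappiness
```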


Two types of pages:

  • anonymous page: dynamic runtime data such as stack and heap
  • FS (filesystem) page: payload such as application data, shared libraries, etc.



With a value of 100, anonymous pages and FS pages are weighted equally. With 0, as before, swapping is effectively disabled. The higher the value is set, the more weight is given to reclaiming anonymous pages. There will always be "cold" pages that were loaded into memory once and never touched again, and for those it makes sense to swap them out early, before memory actually becomes scarce.
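The intuition behind the 0 - 200 range can be sketched with a small calculation. This is a simplification: the actual kernel code additionally scales these weights by the measured refault cost of each page type, but the basic split is swappiness versus 200 minus swappiness.

```shell
# Simplified intuition for the scan weighting (the real reclaim code
# also factors in the observed refault cost per page type):
swappiness=60
anon_weight=$swappiness
file_weight=$((200 - swappiness))
echo "anon:file scan weight = ${anon_weight}:${file_weight}"
# -> anon:file scan weight = 60:140
# swappiness=100 -> 100:100, anon and file pages weighted equally
# swappiness=200 -> 200:0,   reclaim targets anonymous pages only
```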


Details


Previously (before SLES11 SP3), anonymous pages were not considered in the reclaim behavior at all. A 1:1 cost weighting (file-cached pages : anonymous pages) was then introduced. The assumption was always to scan and evict file pages more often than anonymous pages, because swapping out anonymous pages was considered quite expensive and therefore disruptive. Over time it was recognized that in some situations it does make sense to change this behavior, so the mechanism was made more granular.


Summary


At the end of the day, who would have thought it, the new algorithm is more effective and more sensible, but it does lead to more paging. That is not a problem, because the pages being swapped out really are more than "cold". So nobody who notices increased swap usage under the same workload and sizing needs to worry. There are also new metrics in vmstat that can be used to monitor this behavior.
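On sufficiently recent kernels these counters are visible in /proc/vmstat; the per-type refault split (workingset_refault_anon vs. workingset_refault_file) is the part that is new, while the cumulative swap counters have been there for a long time. A quick look:

```shell
# Cumulative swap traffic since boot (pages swapped in/out):
grep -E '^(pswpin|pswpout)' /proc/vmstat

# On recent kernels the refault counters are split per page type,
# which is what makes the new reclaim behavior observable:
grep -E '^workingset_refault' /proc/vmstat

# Live view: the si/so columns of vmstat show swap-in/out per second.
vmstat 1 5
```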


Recommendation


So if there is a lot of I/O on the system and the share of FS pages is low, increasing the vm.swappiness parameter can have a positive effect on performance. I therefore recommend _not_ setting the parameter to 0 or 1 on SLES15 SP4+. Tests have shown that a value of 10 works best with HANA and its typical workload, although a higher value can also be useful (10 - 45 in my tests), since it depends on which third-party tools additionally run on the system (AV, backup, monitoring, etc.). This can only be answered with tests and longer analyses.
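Applying the recommended value looks like this (requires root; the file name under /etc/sysctl.d is an arbitrary example, and note that tuning frameworks such as saptune or tuned may manage this parameter and override a manual setting, so check the active profile first):

```shell
# Set the value at runtime (takes effect immediately, lost on reboot):
sysctl -w vm.swappiness=10

# Persist it across reboots; /etc/sysctl.d is the conventional
# location on both SLES and RHEL. The file name is illustrative.
echo 'vm.swappiness = 10' > /etc/sysctl.d/90-swappiness.conf

# Reload all sysctl configuration files:
sysctl --system
```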


Conclusion


However, the question of the right monitoring remains open. In the past, an alarm was raised as soon as any swap space was used, because it was assumed that a memory bottleneck existed. Everyone now has to ask and answer this question for themselves: Which metrics are used for this? At what threshold do I alert based on the new metrics? Can my monitoring tool read these new metrics? Do I have to build a custom solution? All of this depends on the monitoring solution in use.
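One possible answer to the threshold question is to alert on the swap-out *rate* rather than on mere swap-space usage. A minimal sketch, where the interval and the threshold are placeholder values that must be calibrated against each system's normal baseline:

```shell
#!/bin/sh
# Alert on a sustained swap-out rate instead of swap-space usage.
# INTERVAL (seconds) and THRESHOLD (pages per interval) are
# illustrative values, to be tuned per system.
INTERVAL=60
THRESHOLD=1000

# Extract the cumulative swap-out page counter from /proc/vmstat.
read_pswpout() { awk '/^pswpout/ {print $2}' /proc/vmstat; }

before=$(read_pswpout)
sleep "$INTERVAL"
after=$(read_pswpout)
delta=$((after - before))

if [ "$delta" -gt "$THRESHOLD" ]; then
    echo "WARNING: $delta pages swapped out in ${INTERVAL}s"
fi
```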

