HANA News Blog

Beware of cloud costs

Jens Gleichmann • 13. September 2024

Unforeseen cloud cost increases

The TCO calculation of cloud costs is a moving target. In the past, all hyperscalers have raised their prices or, like other cloud service providers such as SAP, have built an annual increase directly into the contract. The focus is usually on the services consumed, but associated costs that are still small today and have never been on the radar can explode. This happens when a provider changes its licensing or subscription model. The most recent case was Broadcom/VMware, but now another vendor has announced a change: Red Hat.

Most SAP systems still run on SUSE, and as far as I know so do all RISE (and GROW) with SAP systems (public and private). In recent years, however, more and more customers changed their OS strategy because of lower or equal costs and RHEL features, and migrated their systems to RHEL.

Red Hat announced back in January of this year that pricing for cloud partners would change effective April 1, 2024. They called it scalable pricing. The previous structure was a two-tier system that categorized VMs as either "small" or "large" based on the number of cores or vCPUs, with a fixed price per category regardless of the VM's actual size. Subscription fees were determined by the duration of the VM allocation, not by the number of cores or vCPUs, so the cost of a RHEL subscription was capped.

Old RHEL Subscription model categories:


  • Red Hat Enterprise Linux Server Small Virtual Node (1-4 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (5 or more vCPUs or Cores)

This means the new price depends on the number of vCPUs; unlike before, the cost of a subscription is no longer capped. Some of you will remember the chaos of socket-based subscriptions or the Oracle licensing model, including the parking-spot analogy. A physical core now costs as much as a virtual core. Fair enough: if I use more cores, I have to pay more. But that also means the costs can drop for systems with fewer vCPUs, right? This fancy new "scalable pricing" can be a good thing. So when do the costs drop, when do they rise, and who is affected? The short sketch after the new category list below makes the difference concrete.

Red Hat:

"In general, we anticipate that the new RHEL pricing to cloud partners will be lower than the current pricing for small VM/instance sizes; at parity for some small and medium VM/instance sizes; and potentially higher than the current pricing for large and very large VM/instance sizes."


New RHEL Subscription model categories:


  • Red Hat Enterprise Linux Server Small Virtual Node (1-8 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Medium Virtual Node (9 - 128 vCPUs or Cores)
  • Red Hat Enterprise Linux Server Large Virtual Node (129 or more vCPUs or Cores)
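
To make the structural change concrete, here is a minimal Python sketch of both models. It is an illustration, not an official price list: the $94.90 flat fee and the $7.88/$7.01 per-vCPU rates are taken from the Azure calculator figures discussed below, while the other rates are assumed placeholders.

```python
# Sketch of the old (capped, two-tier) vs. new (scalable, per-vCPU)
# RHEL subscription model. Only the 94.90 flat fee and the 7.88/7.01
# per-vCPU rates come from the Azure calculator findings discussed in
# this article; the other rates are assumed placeholders.

OLD_MONTHLY_USD = {"small": 45.00, "large": 94.90}                 # flat, capped
NEW_PER_VCPU_USD = {"small": 8.50, "medium": 7.88, "large": 7.01}  # uncapped

def old_price(vcpus: int) -> float:
    """Old model: one flat monthly fee per VM, regardless of actual size."""
    return OLD_MONTHLY_USD["small"] if vcpus <= 4 else OLD_MONTHLY_USD["large"]

def new_price(vcpus: int) -> float:
    """New model: the tier only selects the per-vCPU rate; cost scales with size."""
    if vcpus <= 8:
        rate = NEW_PER_VCPU_USD["small"]
    elif vcpus <= 128:
        rate = NEW_PER_VCPU_USD["medium"]
    else:
        rate = NEW_PER_VCPU_USD["large"]
    return vcpus * rate

if __name__ == "__main__":
    for vcpus in (4, 8, 20, 64, 128, 192, 416):
        print(f"{vcpus:>3} vCPUs: old ${old_price(vcpus):8.2f}"
              f" -> new ${new_price(vcpus):8.2f} per month")
```

With these rates the output shows exactly the pattern Red Hat describes: cheaper below roughly 12 vCPUs (12 x $7.88 is about the old $94.90 cap), near parity in the middle, and substantially more expensive for large and very large instances.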


Amazon's statement:


"Any new product (such as a new instance type, new instance size, new region, etc.) launched after July 1, 2024, will use the new RHEL pricing. Therefore, any new rates (such as new regions or new instance types) launched after July 1, 2024, will be on the new RHEL pricing, even if the Savings Plans were purchased prior to July 1, 2024."

Microsoft Azure statement:


"In response to Red Hat’s price changes, Azure will also be rolling out price changes for all Red Hat Enterprise Linux instances. These changes will start occurring on April 1, 2024, with price decreases for Red Hat Enterprise Linux and RHEL for SAP Business Applications licenses for vCPU sizes less than 12. All other price updates will be effective July 1, 2024."

GCP statement:


"On January 26, 2024, Red Hat announced a price model update on RHEL and RHEL for SAP for all Cloud providers that scales image subscription costs according to vCPU count. As a result, starting July 1, 2024, any active commitments for RHEL and RHEL for SAP licenses will be canceled and will not be charged for the remainder of the commitment's term duration."

This means the first customers only noticed the change when the July invoice arrived in August. The smart cloud architects who had embraced cloud FinOps noticed it earlier ;)

Let's have a look at the MS Azure pricing calculator, since Azure still has the largest market share for SAP workloads. With 12 or fewer vCPUs you will save money. But what does it mean for customers running SAP HANA instances, given that the smallest certified MS Azure HANA instance has 20 vCPUs? The cost of RHEL for SAP was previously a flat $94.90 per month (the constant line at the bottom of the graph) for all certified HANA instances.

MS Azure: RHEL price per certified HANA SKU [source: MS price calc: europe west / germany west central]
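
A quick back-of-the-envelope check based on the per-vCPU rates observed in the calculator (see the findings below): the smallest certified HANA instance with 20 vCPUs now costs about 20 x $7.88 = $157.60 per month for RHEL for SAP instead of the previous flat $94.90, an increase of roughly 66% even at the very bottom of the HANA range.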


If you are running large instances with 128 or more vCPUs, you will see that the costs differ by more than $10,000 per month! And if you are running large sandboxes or using features like HANA system replication (HSR), you can multiply these additional costs: an HSR pair alone doubles the delta, and every additional system copy adds it again.

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP total instance costs [source: MS price calc: europe west / germany west central]


If we drill down to the RHEL-only costs per month, we can also see the price per vCPU and the rate switch between the 64 vCPU and 128 vCPU SKUs.

MS Azure: Difference of 3 years reserved instances (3YRI) RHEL for SAP costs per month (before vs. now) [source: MS price calc: europe west / germany west central]
MS Azure: total RHEL pricing per SKU with costs per vCPU [source: MS price calc: europe west / germany west central]


Summary

For those using RHEL on small instances with fewer than 12 vCPUs, the new pricing model is a saving. For SAP customers with larger instances it is a nightmare of unplanned additional costs that took effect on July 1, 2024.

Paying the same price for a physical core as for a virtual core is a simple calculation, but the two do not deliver the same value. Anyone using multithreading is penalized more heavily: with SMT enabled, one physical core presents itself as two vCPUs, so the subscription cost doubles while the second thread adds far less than double the performance. Once again customers will question whether Intel is worth the effort when comparing cost against the performance gained.

Will customers now stop migrating to RHEL in cloud projects because of this heavy price difference?

Is it possible to get a RHEL volume discount?

The previous TCO calculation can be thrown in the trash.

Strange findings:


  • price calculators of Google and MS Azure are not up-to-date with the new RHEL pricing model
  • if you choose RHEL for SAP with the HA option, you get the old prices of $87.60 for <= 8 vCPUs and $204.40 for larger instances in the MS Azure price calculator
  • MS Azure is calculating the large virtual node price ($7.01 per vCPU) for the 128 vCPU SKUs, which by definition should get the medium virtual node price ($7.88 per vCPU); see the arithmetic below
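
To put a number on that last finding: if the medium virtual node rate applied as defined, a 128 vCPU SKU would cost 128 x $7.88 = $1,008.64 per month instead of 128 x $7.01 = $897.28, a difference of $111.36 per instance and month, so this particular inconsistency currently works in the customer's favor.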

Options:


  1. AWS and maybe some other providers can offer RHEL volume discounts in exchange for an annual spend commitment. Please work with your Technical Account Manager to assess whether you qualify.
  2. Resize instances that do not currently need their allocated number of vCPUs.
  3. Migrate to SUSE (but SUSE will surely adapt its prices in the near future as well; it is just a matter of time).

