
3 keys to kickstarting SAP HANA In-Memory Computing


April 20, 2011, by Andy Bayliss

Considering the new SAP HANA (High-Performance Analytic Appliance) in-memory computing platform to improve your database performance, and wondering how to implement it faster and at lower total cost? Here’s what you need to do:

1. **Understand the licensing**. Licensing of the new HANA database is based on data volume, so the entry cost for an enterprise can be high once you account for all of your environments: development, QA, test, pre-production and production. For example, an SAP BW production environment of 10TB could translate into a total landscape of 30TB. Those licensing costs, together with the data migration effort, can erode the ROI of the performance improvement.

2. **Know your data**. System memory (RAM) is expensive, and not all data is created equal. While you may have 10TB of data in your warehouse, not all of it needs to reside in memory all the time. Perhaps only 20% of the data is queried often and urgently enough to benefit from HANA; keeping the other 80% in expensive memory significantly drives up costs while providing no additional performance benefit.

3. **Implement Nearline first.** SAP Nearline is fully compatible with HANA: the same level of Information Lifecycle Management functionality and benefits can be achieved. The Nearline component lets an enterprise migrate rarely used data to the Nearline repository while maintaining transparent access to it, directly reducing the size of the online database. The data footprint reduction can be as large as 80%. Returning to our earlier example, a Nearline implementation could shrink the total SAP BW space requirement from 30TB to 6TB, directly cutting the HANA entry cost (see the sizing sketch after this list).
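To make the arithmetic behind these three points concrete, here is a minimal sizing sketch in Python. The 3x environment multiplier and 20% hot-data ratio are the illustrative figures used in this article, not universal constants; plug in your own landscape numbers.

```python
# Minimal HANA sizing sketch based on this article's example figures.
# All three inputs are illustrative assumptions, not SAP list values.

PROD_DATA_TB = 10.0    # SAP BW production data volume
ENV_MULTIPLIER = 3.0   # dev + QA/test/pre-prod + production landscapes
HOT_FRACTION = 0.20    # share of data used often enough to need in-memory

# Without Nearline, the full landscape must be sized for HANA.
total_footprint_tb = PROD_DATA_TB * ENV_MULTIPLIER        # 30 TB

# With Nearline, only the frequently used "hot" data stays in memory.
hot_footprint_tb = total_footprint_tb * HOT_FRACTION      # 6 TB

print(f"HANA footprint without Nearline: {total_footprint_tb:.0f} TB")
print(f"HANA footprint with Nearline:    {hot_footprint_tb:.0f} TB")
print(f"Footprint reduction:             {1 - HOT_FRACTION:.0%}")
```

Running this reproduces the article's numbers: 30TB without Nearline, 6TB with it, an 80% reduction in the data volume that drives HANA licensing.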

What good is a racing car if it’s too weighed down to get off the starting line? An enterprise looking to migrate to SAP In-Memory Computing should implement Nearline first, both to reduce migration costs and to shrink the amount of data that has to be migrated. Data moved to the Nearline repository before the HANA implementation never has to be migrated at all, yet remains automatically accessible through HANA. This strategy not only greatly reduces the overall cost, but also de-risks the whole process.

In a nutshell, put your database on a diet using Nearline prior to the HANA implementation.
