Tuesday 24 September 2024

How does data compression work in the column store of the SAP HANA database? #SAP HANA

Data compression in the SAP HANA database's column store significantly improves storage efficiency and performance. The column store employs a range of compression techniques that substantially reduce data size, optimizing both memory usage and processing times.

With SAP column store tables, data compression can achieve ratios of up to 11 times, making compression a viable cost-saving measure for data storage in the HANA database. Such high compression ratios are critical for the performance of data-intensive applications.

The column store enables efficient compression of data, which reduces the costs associated with keeping data in main memory. This efficiency is essential for managing large datasets typical of enterprise applications.

Columnar data storage facilitates highly efficient compression, primarily because most columns contain only a few distinct values relative to the number of rows. This structure also allows for targeted data access, improving overall read efficiency and query performance, particularly with extensive data sets.

Administrators can monitor compression performance and trigger optimizations as needed. By understanding the compression ratios and structures in place, they can ensure that the database operates at peak efficiency, maintaining optimal space usage and improving access speeds.

Data in column tables can have a two-fold compression:

  • Dictionary compression

This default method of compression is applied to all columns. It involves the mapping of distinct column values to consecutive numbers, so that instead of the actual value being stored, the typically much smaller consecutive number is stored.
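
As a simple illustration (with values invented for this example): a CITY column containing Berlin, Paris, Berlin, Berlin stores the dictionary only once as 0 = Berlin, 1 = Paris, while the column itself holds just the value IDs 0, 1, 0, 0. Because the value IDs are small integers, they can be stored with only a few bits each, which is where most of the space saving comes from.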

  • Advanced compression

Each column can be further compressed using different compression methods, namely prefix encoding, run length encoding (RLE), cluster encoding, sparse encoding, and indirect encoding. The SAP HANA database uses compression algorithms to determine which type of compression is most appropriate for a column. Columns with the PAGE LOADABLE attribute are compressed with the NBit algorithm only.
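
To see which compression method was chosen for the columns of a table, you can, for example, query the monitoring view M_CS_COLUMNS (schema and table name are placeholders):

SELECT COLUMN_NAME, COMPRESSION_TYPE, LOADED, MEMORY_SIZE_IN_TOTAL
FROM M_CS_COLUMNS
WHERE SCHEMA_NAME = '<schema>' AND TABLE_NAME = '<table_name>';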

Compression is automatically calculated and optimized as part of the delta merge operation. If you create an empty column table, no compression is applied initially as the database cannot know which method is most appropriate. As you start to insert data into the table and the delta merge operation starts being executed at regular intervals, data compression is automatically (re)evaluated and optimized.
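
If you do not want to wait for the next automatic delta merge, both the merge and a compression re-evaluation can be requested manually; a sketch with placeholder names:

MERGE DELTA OF "<schema>"."<table_name>";
UPDATE "<schema>"."<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');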

Automatic compression optimization is controlled by the parameter active in the optimize_compression section of the indexserver.ini configuration file. This parameter must have the value yes.
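
The parameter can be set, for example, at the SYSTEM layer with the following statement:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('optimize_compression', 'active') = 'yes' WITH RECONFIGURE;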

 

Cost Functions for Optimize Compression

The cost functions for compression optimization are located in the optimize_compression section of the service configuration (e.g. indexserver.ini):

  • auto_decision_func - if triggered by MergeDog
  • smart_decision_func - if triggered by SmartMerge
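
The current values of this section can be checked via the M_INIFILE_CONTENTS monitoring view, for example:

SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'optimize_compression';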

 

Note: 

  1. Advanced compression is applied only to the main storage of column tables. As the delta storage is optimized for write operations, it has only dictionary compression applied.
  2. If the standard method for initiating a delta merge of the table is disabled (AUTO_MERGE_ON column in the system view TABLES is set to FALSE), automatic compression optimization is implicitly disabled as well. This is the case even if the AUTO_OPTIMIZE_COMPRESSION_ON column is set to TRUE in the system view TABLES. It is necessary to disable auto merge if the delta merge operation of the table is being controlled by a smart merge triggered by the application. For more information, see the section on merge motivations.

 

Monday 23 September 2024

FAQ: SAP HANA Native Storage Extension (NSE)

1. How can I activate NSE?

NSE can be activated via DDL commands on the desired tables. There are no additional configuration steps needed.
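
For example, a table, a single column, or a partition can be converted to page loadable as follows (object names are placeholders; the partition variant assumes a partitioned table):

ALTER TABLE "<table_name>" PAGE LOADABLE;
ALTER TABLE "<table_name>" ALTER ("<column_name>" ALTER PAGE LOADABLE);
ALTER TABLE "<table_name>" ALTER PARTITION 1 PAGE LOADABLE;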

2. I have activated NSE for a database object but want to revert my changes. How can I achieve this?

Changes to database objects can be reverted by setting the load granularity to default.

ALTER TABLE "<table_name>" DEFAULT LOADABLE;

Adding the CASCADE option overwrites all modifications on the layers below (e.g. partitions and columns):

ALTER TABLE "<table_name>" DEFAULT LOADABLE CASCADE;

 

3. Where do I find information about buffer cache usage?

Information on the buffer cache usage such as hit ratio and size can be obtained in the following locations:

  • M_BUFFER_CACHE_STATISTICS
  • M_BUFFER_CACHE_POOL_STATISTICS
  • SQL: "HANA_NSE_BufferCache" (SAP Note 1969700)
  • M_SQL_PLAN_CACHE (starting SAP HANA 2.0 SPS07, columns *_BUFFER_CACHE_IO_READ_SIZE, *_BUFFER_CACHE_PAGE_MISS_COUNT and *_BUFFER_CACHE_PAGE_HIT_COUNT)
  • SQL: "HANA_SQL_SQLCache_2.00.070+" (SAP Note 1969700)

4. How can I calculate the memory consumption of NSE-enabled tables?

Pages of NSE tables are loaded into the buffer cache when the data is accessed. Some helper structures for the columns are still loaded into the main store area of HANA and reside there until the column is unloaded.

After reading an NSE table, its non-pageable area will be loaded into the main store memory, while the pageable area will be loaded into the buffer cache. If you check the memory consumption for the table just after this read, the value will be the sum of the main store memory and the memory occupied in the buffer cache. Keep in mind, however, that the allocation in the buffer cache is transient: as soon as another NSE table is read, the pages loaded into the buffer cache may be displaced in a least-recently-used fashion, or they may stay there longer if the buffer cache is large enough to accommodate both tables. Given this transient characteristic, the buffer cache should not be counted when determining the memory savings brought by NSE, i.e. when defining the total memory allocated by an NSE table.

SQL: "HANA_Tables_ColumnStore_TableSize_2.00.030+" (SAP Note 1969700) reports the NSE-related memory consumption in column PAGED_GB.

5. Where is the buffer cache located?

The buffer cache is stored in the heap allocator Pool/CS/BufferPage. The allocator always resides in DRAM. This also applies to systems using PMEM or Fast Restart Option.

6. How does the buffer cache behave during memory pressure?

The NSE buffer cache is not unloaded as part of shrink operations. As a result, the memory used by the NSE buffer cache cannot be released and reused by other components in case the database runs out of memory.

To avoid memory issues, the buffer cache should be configured with a reasonable size depending on the size of tables located in NSE.
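
The size is controlled by the max_size parameter (in MB) in the buffer_cache_cs section; the value below is only an example:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;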

7. How to handle NSE advisor recommendations to move columns like $trexexternalkey$ or $AWKEY_REFDOC$GJAHR$?

$trexexternalkey$ and $AWKEY_REFDOC$GJAHR$ are internal columns (SAP Note 1986747) that refer to the primary key and to a multi-column index (concat attribute), respectively. Indexes can be moved to NSE as well.

a) Identify the index name:

SELECT SCHEMA_NAME, TABLE_NAME, INDEX_NAME, INDEX_TYPE, CONSTRAINT FROM INDEXES WHERE TABLE_NAME = '<table_name>';

b) Change the index load granularity:

ALTER INDEX "<schema>"."<index_name>" PAGE LOADABLE;example: ALTER INDEX "PLAYGROUND"."_SYS_TREE_CS_#170484_#0_#P0" PAGE LOADABLE;

 

8. How can I get information on the impact of NSE on query performance?

Starting with SAP HANA 2.0 SPS07, the monitoring view M_SQL_PLAN_CACHE contains several additional columns that provide deeper insight into NSE usage during query execution (*_BUFFER_CACHE_IO_READ_SIZE, *_BUFFER_CACHE_PAGE_MISS_COUNT and *_BUFFER_CACHE_PAGE_HIT_COUNT).

The following SQL commands from SAP Note 1969700 support the investigation:

  • SQL: "HANA_SQL_SQLCache_2.00.070+" 
  • SQL: "HANA_SQL_SQLCache_TopLists_2.00.070+" 

9. Does DDIC support NSE?

Since S/4 HANA 2021, DDIC supports the load unit settings "Page Preferred" and "Page Enforced". Depending on the S/4 HANA release, these settings can change.
The current configuration can be checked via SE11 -> "Technical Settings" -> "DB-Specific Properties".
Alternatively, you can query the details directly in DBACOCKPIT via the following SQL:

select
    t.TABLE_NAME, t.LOAD_UNIT HDB_LOAD_UNIT,
    CASE WHEN d.LOAD_UNIT = 'P' THEN 'Page Preferred'
      WHEN d.LOAD_UNIT = 'Q' THEN 'Page Enforced' END DDIC_LOAD_UNIT
FROM
    TABLES t full outer join DD09L d ON t.table_name = d.tabname
WHERE
    (t.LOAD_UNIT = 'PAGE' OR d.LOAD_UNIT IN ('P', 'Q')) AND t.SCHEMA_NAME LIKE 'SAP%';

For further details on DDIC integration, please see SAP Note 2973243.

10. When are DDIC load unit settings applied and what happens if an upgrade changes the DDIC load unit?

Whether load unit settings are applied automatically depends on the scenario:

  1. Conversion from anyDB to SAP S/4 HANA: the DDIC load unit is applied in the course of the migration; tables with "Page Preferred" and "Page Enforced" are created using NSE.
  2. Fresh installation of S/4 HANA 2021+: the DDIC load unit is applied directly during installation.
  3. Conversion from Suite on HANA to S/4 HANA, or an S/4 HANA upgrade: DDIC load unit settings are not enforced. In this scenario only exchange tables are created with the new settings; the load unit setting of existing tables is preserved. Only a change to "Page Enforced" results in a load unit change on the database. Other SAP_BASIS objects with load unit "Page Preferred", such as BALDAT, CDPOS, and EDID4, stay as is and are not changed automatically.

11. Do I need additional disk space when using NSE?

During conversion to page loadable, the persistence structures are rewritten for optimal usage with NSE. In most cases, this does not change the total amount of disk space needed. However, in rare cases, tables, partitions, and columns that are converted to NSE may see disk usage increase by up to 80%. Some column data types (such as TIMESTAMP, DATE, CHAR, VARCHAR and NVARCHAR) do not support all possible dictionary compressions when columns are stored as page loadable.

12. Is the buffer cache preloading data after a system restart just like the normal column store does?

No, the buffer cache will be empty after restart and only be populated on demand.

Note: 

Please refer to the SAP Note for full details.

2799997 - FAQ: SAP HANA Native Storage Extension (NSE)

What is SAP HANA Native Storage Extension? #Data Archiving

SAP HANA 2 SPS 04 provides a new feature called SAP HANA Native Storage Extension (NSE). NSE is a disk-based extension to the in-memory COLUMN STORE in SAP HANA. Instead of loading a whole column, only the needed pages are loaded into the buffer cache when accessing the data.

SAP HANA Native Storage Extension (NSE) is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. It integrates disk-based or flash-drive-based database technology with the SAP HANA in-memory database for an improved price-performance ratio. 

SAP HANA offers various software solutions to manage multi-temperature data (hot, warm, and cold), such as DRAMs for hot data and SAP HANA Extension Nodes, SAP HANA dynamic tiering for warm data, and SAP HANA Cold Data Tiering for cold data. 

Hot data is used to store mission-critical data for real-time processing and analytics. It is retained continuously in SAP HANA memory for fast performance and is located in the highest performance (and highest TCO) storage. 

Warm data is primarily used to store mostly read-only data that need not be accessed frequently. The data need not reside continuously in SAP HANA memory, but is still managed as a unified part of the SAP HANA database ― transactionally consistent with hot data, participating in SAP HANA backup and system replication operations, and stored in lower-cost stores within SAP HANA.

Cold data is used to store read-only data, with very infrequent access requirements. You manage cold data separately from the SAP HANA database, but you can still access it from SAP HANA using SAP HANA's data federation capabilities.

[Image: standard HANA in-memory storage compared with the storage offered with NSE]

The capacity of a standard SAP HANA database is equal to the amount of hot data in memory. However, the capacity of a SAP HANA database with NSE is the amount of hot data in memory plus the amount of warm data on disk.

Since growth in data volume results in increased hardware costs, the ability to decouple data location from a fixed storage location (layer) is one of the key themes of a multi-temperature data storage strategy.

NSE is integrated with other SAP HANA functional layers, such as query optimizer, query execution engine, column store, and persistence layers. Key highlights of NSE are:

  • A substantial increase in SAP HANA data capacity, with good performance for high-data volumes.
  • The ability to co-exist with the SAP HANA in-memory column store, preserving SAP HANA memory performance.
  • An enhancement of existing in-market paging capabilities by supporting compression, dictionary support, and partitioning.
  • An intelligent buffer cache that manages memory pages in SAP HANA native storage extension column store tables.
  • The ability to monitor and manage buffer cache statistics via system views.
  • The ability to support any SAP HANA application.
  • A simple system landscape with high scalability that covers a large spectrum of data sizes.
  • An advisor that collects object access statistics and provides column store object load unit recommendations (see the example after this list).
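
To use the NSE advisor, access statistics collection has to be enabled first; afterwards the recommendations can be read from the M_CS_NSE_ADVISOR view. A sketch, assuming the parameter location documented for recent SPS levels:

ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;
SELECT * FROM M_CS_NSE_ADVISOR;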

Note: 

  1. The NSE feature in SAP HANA does not require you to modify your applications.
  2. Although SAP HANA 2.0 calculates 10% of memory for the buffer cache by default, this memory is only reserved and not allocated. SAP HANA accesses 100% of its memory (including the 10% reserved for the buffer cache) if you are not using NSE.
  3. If your current version of SAP HANA contains page-loadable tables and you move to a later version, only those tables that were designated as page loadable in the earlier version use the buffer cache in the later version (up to the limit that was calculated in the original version of SAP HANA you were running).

 

 

What is SAP Information Lifecycle Management (SAP ILM) ?

Information has a lifecycle. It is created, it lives within databases and systems, it changes, and it is archived and eventually deleted. With SAP Information Lifecycle Management (SAP ILM), companies can meet their data retention, data destruction, and system decommissioning requirements and obtain compliance with legal and regulatory mandates. As a result, SAP Information Lifecycle Management (SAP ILM) helps companies streamline their technical infrastructure, reduce IT costs, and improve IT risk and compliance management.

SAP Information Lifecycle Management (SAP ILM) is based on the following pillars:

• Data archiving (active data and system):

  • Analyze data volumes
  • Securely move data from the database to the archive
  • Access archived data conveniently

• Retention management (end-of-life data):

  • Define and manage all retention policies across the enterprise
  • Manage the destruction of data responsibly based on policies
  • Enforce retention policies
  • Use secure information lifecycle management–aware storage (partner offerings)
  • Perform e-discovery and set legal holds

• System decommissioning (end-of-life system):

  • Decommission SAP and non-SAP legacy systems to a central retention warehouse 
  • Enforce retention policies on data from the shut-down system
  • Run reporting on data from the shut-down system (SAP Business Warehouse (SAP BW) and local reporting)
  • Use predefined business warehouse queries for reporting
  • Interpret and understand data in archives without the help of the original system

 

To learn more about SAP Information Lifecycle Management (SAP ILM), please contact your SAP representative, write to us at ilm@sap.com, or visit us on the Web at http://scn.sap.com/community/information-lifecycle-management.

Thursday 19 September 2024

How to check SAP Application Log statistics with the help of program "SBAL_STATISTICS"? #SAP Basis Administrator

To check SAP application log statistics, you can use the program "SBAL_STATISTICS". It provides reporting capabilities for analyzing and obtaining detailed statistical information about the application logs generated by various applications within SAP, which makes it a useful tool for monitoring and managing log entries effectively.


Based on the findings from the SBAL_STATISTICS report, administrators may decide to clean up old logs, adjust logging levels, or take corrective actions on the applications concerned.

Accessing the Program

You can access the program through transaction code "SA38", as shown below.

Click Execute and select the statistic collection option as required.


STATISTIC COLLECTIONS

TIME CREATION:

get the number of logs created in each month of a year

CLIENTS:

get the number of logs in each client

OBJECTS, SUBOBJECTS, MESSAGES:

detailed overview of logs based on OBJECT and SUBOBJECT

columns:

  • OBJECT, SUBOBJECT (defined in transaction SLG0)
  • MIN DATE - the oldest creation date (highlighted if older than 180 days)
  • MAX DATE - the latest creation date (highlighted if older than 180 days)
  • Logs Cnt - number of logs
  • MsgCnt All - number of messages
  • MsgCnt E, W, I, S, A - number of messages of type Error, Warning, Information, Success, Cancel
  • reached exp date - number of logs that have reached their expiry date and can be deleted via the SLG2 option "Only logs that have reached their expiration date"
  • and can be deleted - number of logs that can be deleted via the SLG2 option "And logs that can be deleted before the expiration date"
  • cannot be deleted - number of logs that cannot be deleted via SLG2 ("Non-deletable logs (display only)"); such logs can be deleted only by the application that created them
  • never expire - number of logs whose expiry date is set to 31/12/9999
  • % - percentage of logs in the current client

USERS:

get the number of logs for each user

PROGRAMS:

get the number of logs for each program

TCODE:

get the number of logs for each transaction code

EXPIRATION DATE: 

get the number of logs for each option of Expiration Date (SLG2)

- Only logs that have reached their expiration date

- And logs that can be deleted before the expiration date

- Non-deletable logs (display only)


RELID data, BALDAT: 

get the number of records for each RELID in BALDAT

Statistic logs

To view the data, select an option and click Execute.

You can view log details under Details as shown below.

Document referred: 2524124 - Application log Statistics

Tuesday 17 September 2024

What are LOBs and why are they important in the HANA database?

 In the SAP HANA database, Large Objects (LOBs) are unstructured data types such as images, videos, or documents. The importance of LOBs in HANA, particularly regarding their disk size, is significant for performance, storage management, and application development.

1. Definition of LOBs

LOBs, or Large Objects, refer to unstructured data types that include items such as pictures, PDFs, and XML content. They are characterized by their capability to be quite large, which necessitates special considerations in terms of storage and performance within databases like SAP HANA.

2. Disk Size Limitation

In SAP HANA, the current maximum size for a LOB is 2 GB. This limitation is critical as it influences how data is managed within the database and affects the strategies for data storage and retrieval employed by application developers.

3. Storage of LOBs in HANA

SAP HANA can store large binary objects (LOBs) such as images or videos on disk, rather than inside column or row structures in main memory. This distinction is vital for performance optimization, as storing large objects on disk allows for more efficient use of memory resources in main operations.
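
The disk footprint of LOBs can be inspected, for example, via the monitoring view M_TABLE_LOB_FILES (this sketch assumes its PHYSICAL_SIZE column, available in recent revisions):

SELECT SCHEMA_NAME, TABLE_NAME, COUNT(*) AS LOB_FILES, SUM(PHYSICAL_SIZE) AS PHYSICAL_SIZE_BYTES
FROM M_TABLE_LOB_FILES
GROUP BY SCHEMA_NAME, TABLE_NAME
ORDER BY SUM(PHYSICAL_SIZE) DESC;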

4. Impact on Performance

The way LOBs are managed in SAP HANA—their storage on disk instead of main memory—has implications for startup times and takeover processes. As LOBs can occupy considerable storage space, their management directly affects the overall efficiency of data access patterns in applications utilizing HANA.

5. Application Development Considerations

Understanding the size and nature of LOBs is essential for application developers working with SAP HANA. They must consider these aspects when designing data models and structures to ensure optimal performance and adherence to size constraints for LOB storage.

How does the SAP table "BALDAT" enhance the performance of the SAP system, and why is it important? #SAP Basis Administrator

The BALDAT table in SAP serves as a standard repository for logging application events. It is crucial for tracking, analyzing, and troubleshooting various activities within the SAP system. Understanding its technical details and significance will aid users in managing logs efficiently, enhancing system performance, and maintaining operational integrity.

1. Overview of the BALDAT Table

The BALDAT table, also referred to as "Application Log: Log Data," is a standard table used within SAP R/3 ERP systems to store detailed log information related to application processes. It provides a structured format for capturing critical data necessary for monitoring system activities and auditing changes made during transactions.

2. Technical Structure

The BALDAT table comprises several fields that define the data structure, including, but not limited to:

  • MANDANT (Client): Identifies the client within which the log entry is stored, fundamental for data organization and segregation across different business environments.
  • RELID (Region in IMPORT/EXPORT Data Table): This field helps to indicate the region related to log entries, facilitating more refined data categorization.
  • LOG_HANDLE (Application Log: Log Handle): A unique identifier for each log entry, ensuring traceability and reference.
  • BLOCK (Internal Message Serial Number): Numbers log entries serially to assist in organizing log data.

Additional fields track the length of user data, data types, and various parameters related to the logs.

3. Importance of BALDAT in SAP Systems

The BALDAT table holds significant importance in an SAP system for various reasons:

  • Error Tracking and Troubleshooting: It captures detailed error messages, warnings, and informational logs generated during transactions, allowing administrators to investigate issues effectively. This capability is vital for diagnosing application performance and operational integrity.
  • Auditing and Security: Logs maintained in BALDAT can be leveraged for auditing purposes. They provide insights into user activities, changes made to data, and potential security violations, ensuring compliance with regulatory standards.
  • Job Monitoring: The table records logs related to job scheduling and execution, allowing administrators to track the status and outcomes of automated tasks and background processes, enhancing overall system management.

4. Log Management and Maintenance

Given that the application logs can grow exponentially, regular maintenance of the BALDAT table is critical. Tools like the SBAL_DELETE report can be utilized to automate the cleanup of outdated log entries, minimizing performance degradation due to excessive log data. Furthermore, administrators can analyze logs using transactions such as SLG2 to identify non-deletable logs and manage retention effectively.
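
Before and after such a cleanup it is worth checking the current size of BALDAT, for example:

SELECT TABLE_NAME, RECORD_COUNT, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE TABLE_NAME = 'BALDAT';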

Please refer to SAP Note: 

195157 - Application log: Deletion of logs

2524124 - Application log Statistics

5. Conclusion and Best Practices

In summary, the BALDAT table's structured logging capability and detailed recording of application activities make it an essential component of the SAP ecosystem. Regular monitoring and maintenance of this table, coupled with a proactive cleanup strategy, can significantly enhance system performance and operational compliance. Best practices include the periodic execution of log cleanup reports, adherence to logging policies, and the implementation of comprehensive log analysis procedures.

This comprehensive overview provides a clearer understanding of the technical specifications and significance of the BALDAT table within SAP systems.

 

Monday 16 September 2024

How to switch HANA alerts off/on with an SQL statement?

You can switch off the underlying alert checker so that it no longer runs automatically, or switch it back on again, using the following method:

If you need to disable or enable these alerts in batch (more than one at once), the easy way is to manipulate STATISTICS_SCHEDULE (_SYS_STATISTICS).
STATISTICS_SCHEDULE Table (Embedded Statistics Service)
Statistics schedule per host. This view contains information only for the last 7 days.

 

Caution
If you switch off alerts, you may not be warned about potentially critical situations in your system.
Be careful when modifying data in _SYS_STATISTICS through SQL script.

For example, to disable all the schedules for the alerts shown in both HANA Cockpit > Configure Alerts and HANA Studio > Alerts, run the following statement:

UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET "STATUS" = 'Inactive' WHERE "ID" BETWEEN 1 AND 710;

If you want to activate them again with the original schedule, run the following statement:

UPDATE _SYS_STATISTICS.STATISTICS_SCHEDULE SET "STATUS" = 'Idle' WHERE "ID" BETWEEN 1 AND 710;

You can build your own scripts and procedures for this purpose. For example, to save the current status of the alerts to restore later or to just disable a few alerts for a given maintenance window.