Tuesday, 26 November 2024

Which SAP tools can we use for mass data extraction from SAP?

There are several tools available for mass data extraction from SAP systems. These tools are designed to efficiently retrieve large volumes of data, enabling businesses to perform analytics, reporting, and integration with other systems. Below are some of the primary tools used for mass data extraction:


1. SAP Data Services


SAP Data Services is a powerful extraction, transformation, and loading (ETL) tool that supports data integration across various sources. It enables businesses to extract data from both SAP and non-SAP systems efficiently.


Key Features:

Supports integration from multiple sources, including databases and flat files.

Offers robust transformation capabilities to prepare data for analysis.

Allows for batch processing, which is useful for handling large data volumes.


2. SAP Business Warehouse (BW)


SAP BW is predominantly used for reporting and analysis but also serves as a powerful tool for mass data extraction. It allows users to extract data from various SAP applications and can consolidate this data into a central repository.


Key Features:

Provides pre-defined extractors that pull data from various SAP modules.

Highly customizable, allowing for tailored data extraction processes.

Integration with SAP HANA enhances query performance and reporting capabilities.


3. Legacy System Migration Workbench (LSMW)


LSMW is specifically designed for transferring data from legacy systems into SAP. It supports various methods for mass data extraction, including direct input and batch input methods.


Key Features:


  1. Allows the mapping and transformation of data from external systems.
  2. Supports the import of large datasets into SAP using predefined data structures.
  3. Suitable for both initial data uploads and ongoing data migrations.


4. SAP Landscape Transformation (SLT)

SLT is primarily used for real-time or scheduled data replication from SAP systems to SAP HANA databases or other target systems. It is effective for mass data extraction, especially in complex SAP environments.


Key Features:

  1. Supports trigger-based replication for real-time data synchronization.
  2. Flexible extraction configurations allow for specific data selections.
  3. Capable of handling large datasets efficiently while ensuring data integrity during the transfer.


5. Operational Data Provisioning (ODP)

ODP is a framework that enables efficient data extraction from SAP source systems. It supports both real-time data lookup and bulk data extraction.

Key Features:

  1. Optimizes data transfer processes, reducing the impact on the source system.
  2. Increases flexibility by allowing integration with various SAP and non-SAP sources.
  3. Facilitates data replication and loading into data warehouses or external applications.


6. SAP HANA Smart Data Integration (SDI)


SDI allows for seamless data access and integration from diverse sources into SAP HANA. It provides capabilities to extract data from both homogeneous (SAP) and heterogeneous (non-SAP) systems.


Key Features:

  1. Real-time data replication and transformation capabilities.
  2. Supports a variety of data sources, enhancing flexibility in data management.
  3. Enables automated extraction processes, which is beneficial for large-scale operations.


Conclusion

These tools represent a collection of solutions available for mass data extraction from SAP systems. Each tool serves distinct purposes and comes with unique features tailored to meet specific business needs. Organizations can leverage these tools to enhance their data management processes, enabling better insights and decision-making capabilities. Implementing the right combination of these tools can significantly streamline data extraction tasks and improve operational efficiency.


Monday, 25 November 2024

Error codes during logon (list) # SAP Basis Administrator

 ERROR: 

1:   During an (RFC) logon, the system displays the following text:

"You are not authorized to logon to the target system (error code...)"

with an error code number whose meaning is unclear to you.


2:  You find the following unfamiliar lines in the developer trace file (dev_w..):

DyISigni: client=..., user=..., lang=... , access=..., auth=...

usrexist: effective authentification method: ....

DyISigni: return code=... (see Note 320991)


The extended trace messages (written as of trace level 2 for the "Security" component; you can activate them dynamically using transaction SM50) are available as of the following kernel versions:


4.6D kernel starting from patch level 141

4.5B kernel starting from patch level 506

Explanation of the Error Codes/Return Codes

0 No error - successful logon

1 Incorrect logon data (client, user name, password)
2 User account is locked
3 Incorrect logon data; for SAPGUI: connection closed
4 Successful logon using virtual user or emergency super user
5 Error when constructing the user buffer (==> possible follow-on error)
6 User exists only in the central user administration (CUA)
7 Invalid user type
8 User account outside validity period
9 SNC name and specified user/client do not match
10 Logon requires SNC (Secure Network Communication)
11 No ABAP user with this SNC name exists in the system
12 ACL entry for SNC-secured server-server link is missing
13 No suitable SAP account found for the SNC name
14 Ambiguous assignment of SNC names to ABAP users
15 Unencrypted SAP GUI connection refused
16 Unencrypted RFC connection refused
20 Logon using logon/assertion ticket is generally deactivated
21 Syntax error in received logon/assertion ticket or reentrance ticket not valid
22 Digital signature check for logon/assertion ticket fails
23 Logon ticket/assertion issuer is not in the ACL table
24 Logon/assertion ticket is no longer valid
25 Assertion ticket receiver is not the addressed recipient
26 Logon/assertion ticket contains no/an empty ABAP user ID
27 Reauthorization check: ticket does not match current user
28 Ticket logon denied by security policy
30 Logon using X.509 certificate is generally deactivated
31 Syntax error in the received X.509 certificate
32 X.509 certificate does not originate from the Internet Transaction Server
34 No suitable ABAP user found for the X.509 certificate
35 Ambiguous assignment of X.509 certificate to ABAP users
36 Certificate is older than the date entered as "min. date" (USREXTID)
37 X.509 certificate is not currently valid
41 No suitable ABAP user found for the external ID
42 Ambiguous assignment of external ID to ABAP users
50 Password logon was generally deactivated or denied by security policy
51 Initial password has not been used for too long
52 User does not have a password
53 Password lock active (too many failed logons)
54 Productive password has not been used for too long
60 SPNego logon denied by security policy
61 Invalid SPNego token (syntax)
62 NTLM token received instead of SPNego token
63 Missing/incorrect Kerberos keytab entry
64 Invalid SPNego token (time)
65 SPNego replay attack detected
66 SPNego: Error when creating the SNC name
67 SPNego: No suitable SAP account found for the SNC name
68 SPNego: Ambiguous assignment of SNC names to ABAP users
69 Reauthentication check: SPNego token does not match current user
100 Client does not exist
101 Client is currently locked for logons
102 External WebSocket RFC communication is not allowed (RFC runtime)
103 External WebSocket RFC communication requires alias user (RFC runtime)
104 System is in maintenance mode and locked against logons
110 Tenant was stopped (runlevel STOPPED)
111 Tenant cannot be used generally (runlevel ADMIN)
112 No authorization to log on to the current logon category
120 Server does not allow logon
121 No special rights for logon on this server
300-399 OpenID Connect (OIDC) error; see SAP Note 3111813
1001 Password is initial/has expired - interactive change required (RFC/ICF)
1002 Trusted system logon failed (no S_RFCACL authorization)
3000 Reauthorization check: SAML bearer assertion is not compatible with current user
3001 Internal SAML bearer assertion verification error
3002 SAML bearer assertion could not be parsed
3003 SAML bearer assertion was already used (replay)
3004 SAML bearer assertion could not be assigned to a user
3005 Issuer of SAML bearer assertion is not trusted
3006 NameID format of SAML bearer assertion is not supported
3007 Signature of SAML bearer assertion is not valid
3008 SAML bearer assertion is not valid or is no longer valid
3009 SAML is not activated or SAML bearer assertion provider is not activated


Explanations for "access" (access types):

A Dialog logon (SAP GUI)
B Background processing (batch)
C CPIC
F RFC (as of 4.6C: internal RFC)
R RFC (as of 4.6C: external RFC)
I RFC system call (internal SRFC)
S RFC system call ( [external]* SRFC) - *see SAP Note 2590963
U User switch (internal call)
H HTTP
u Restore session (ABAP class CL_USERINFO_DATA_BINDING)
" " API call (such as SUSR_CHECK_LOGON_DATA)
M SMTP authentication (MTA): Password check
P ABAP push channel (APC)/WebSockets
E Establishment of a shared memory area (internal call)
O AutoABAP (internal call)
T Server startup procedure (internal call)
V SAP start service (internal call)
J Java Virtual Machine (internal call)
W BGRFC watchdog (internal call)
G ABAP Resource Manager (internal call)
r RFC via WebSockets (external)
Y TRFC/QRFC/bgRFC (internal)

Explanations for "auth" (authentication types):

P Password-based authentication
T Logon ticket
t Assertion ticket
X Certificate-based logon (X.509, https)
S SNC (Secure Network Communication)
R Internal RFC or trusted system RFC
A Internal call via background processing, for example
E External authentication (PAS, SAML, ...)
U Inverse user switch (ABAP class CL_USER_POC)
s HTTP security session
2 SAML2
1 SAML1
o OAuth2
N SPNego
a APC session (WebSockets)
B SAML bearer
r Reentrance ticket
D OIDC logon
d OIDC bearer

List: CPIC error codes in SAP Systems # SAP Basis Administrator

This note concerns error analysis in the network environment, and CPIC return codes in particular, where the meaning of the numeric return code values is unclear.


  CPIC return codes (not SAP-specific)

  CM_OK                          0
  CM_ALLOCATE_FAILURE_NO_RETRY  1
  CM_ALLOCATE_FAILURE_RETRY      2
  CM_CONVERSATION_TYPE_MISMATCH  3
  CM_SECURITY_NOT_VALID          6
  CM_SYNC_LVL_NOT_SUPPORTED_PGM  8
  CM_TPN_NOT_RECOGNIZED          9
  CM_TP_NOT_AVAILABLE_NO_RETRY  10
  CM_TP_NOT_AVAILABLE_RETRY    11
  CM_DEALLOCATED_ABEND          17
  CM_DEALLOCATED_NORMAL        18
  CM_PARAMETER_ERROR            19
  CM_PRODUCT_SPECIFIC_ERROR    20
  CM_PROGRAM_ERROR_NO_TRUNC    21
  CM_PROGRAM_ERROR_PURGING      22
  CM_PROGRAM_ERROR_TRUNC        23
  CM_PROGRAM_PARAMETER_CHECK    24
  CM_PROGRAM_STATE_CHECK        25
  CM_RESOURCE_FAILURE_NO_RETRY  26
  CM_RESOURCE_FAILURE_RETRY    27
  CM_UNSUCCESSFUL              28
  CM_OPERATION_INCOMPLETE      35
  CM_SYSTEM_EVENT               36

Gateway error codes


  CPIC_ERROR                    221 Error in the CPIC interface
  CANT_GET_MEMORY              222 Memory bottleneck
  NI_READ_FAILED                223 Network read error
  NI_WRITE_FAILED              224 Network write error
  INVALID_REQUEST              225 Invalid request
  NOT_YET_CONNECTED            226 Not yet connected
  GW_WP_DIED                    227 Gateway process died
  SHM_READ_FAILED              228 Shared memory problem (read)
  SHM_WRITE_FAILED              229 Shared memory problem (write)
  NO_MORE_LU                    230 No available LU
  NO_MORE_WP                    231 No available gateway process
  CANT_START_WORKPROCESS        232 Error when starting the gateway process
  WRONG_COMM_TYPE              233 Wrong communication type
  CONNECT_FAILED                234 Connection setup failed
  COMM_TABLE_ERROR              235 Error in comm. table
  GW_CONNECT_FAILED            236 No connection to the gateway
  GW_DISCONNECTED              237 Connection to the gateway disconnected
  WRITE_TO_GW_FAILED            238 Error with GW comm. (write)
  READ_FROM_GW_FAILED          239 Error with GW comm. (read)
  INVALID_LEN                  240 Invalid length
  INVALID_ENVIRONMENT           241 Invalid environment
  GW_TIMEOUT                    242 Timeout
  GW_CONNECT_TO_R3              243 Error when setting up R/3 connection
  SYSTEM_DISCONNECTED          244 Partner disconnected connection
  MEM_OVERFLOW                  245 Memory overflow
  WRONG_APPCHDR_VERSION         246 Incorrect APPC header version
  GW_APPC_SERVER_DOWN          247 Loc. gateway not started
  TXCOM_TABLE_FAILED            248 Error when accessing TXCOM
  COMM_TABLE_OVERFLOW           249 Comm. table full
  C_NO_MEM                      450 No memory
  C_NO_SIDE_INFO                451 No SIDE INFO entry
  C_TP_START                    452 TP-START failed
  C_NO_INIT                    453 No initialization
  C_GETLU                      454 "getlu" failed
  C_SIGNAL                      455 "signal" failed
  C_TIMEOUT                    456 Timeout when establishing connection
  C_ALLC                        457 CMALLC failed
  C_SEND                        458 CMSEND failed
  C_PREPARE                    459 Prepare-To-Receive failed
  C_FLUSH                      460 CMFLUS failed
  C_RECEIVE                    461 CMRCV failed
  C_NO_ARGUMENT                462 Missing argument
  C_GET_ALLOCATE                463 "get_allocate" failed
  C_DEAL                        464 CMDEAL failed
  C_TP_END                      465 TP-END failed
  C_MAX_CONV                    466 Max. number conv. reached
  C_SNAOPEN                    467 "snaopen" failed
  C_SNACTL                      468 "snactl" failed
  C_NO_FLUSH                    469 No flush in IBM environment
  C_SNACLSE                    470 "snaclse" failed
  C_STATE_CHECK                471 Status error
  C_NO_SIDE_INFO_ENTRY          472 No side info entry
  C_NO_CONV                    473 No conversation
  C_MANUAL_CANCELD              474 Connection manually cancelled
  C_AUTO_CANCELD                475 Connection automatically cancelled
  C_NO_PARTNER                  476 No partner found
  C_CONFIRM                    477 Confirm failed
  C_CONFIRMED                  478 Confirmed failed
  C_NO_HOST_IN_SIDE_INFO        479 GWHOST not in side info entry
  C_NO_SERV_IN_SIDE_INFO        480 GWSERV not in side info entry
  C_NO_PROT_IN_SIDE_INFO        481 PROTOCOL not in side info entry
  C_NO_LU_IN_SIDE_INFO          482 LU not in side info entry
  C_NO_TP_IN_SIDE_INFO          483 TP not in side info entry
  C_NO_GATEWAY_CONNECTION       484 No connection to the gateway
  C_GETHOSTNAME                485 gethostname failed
  C_NO_SAP_CMACCP              486 SAP_CMACCP not executed
  C_NO_PROGRAM_NAME_ARG         487 Program not in arg. list
  C_NO_HOST_ARG                488 Host not in arg. list
  C_NO_SERV_ARG                489 Service not in arg. list
  C_NO_CONVID_ARG              490 Conv. ID not in arg. list
  C_ILLEGAL_PARAMETER           491 Illegal parameter
  C_LU62CVCT                    492 LU62CVCT failed
  C_LU62ATTACH                  493 LU62ATTCH failed
  C_NO_CONV_TABLE              494 No conv. table
  C_ILL_CONV_TABLE              495 Incorrect conv. table
  C_ILL_MOD_VALUES              496 Invalid conv. modification
  C_NIHOSTTOADDR                497 NiHostToAddr failed
  C_NIADDRTOHOST                498 NiAddrToHost failed
  C_THOST_FAILED                499 Reading table THOST failed
  INVALID_MODE                  630 Invalid mode number received
  MAX_NO_OF_GATEWAYS            631 Max. no. of gateways reached
  MISSING_LU_SPEC              632 No LU specified
  MAX_CPIC_CLIENTS              633 Max. no. of clients reached
  BAD_TPNAME                    634 Invalid TP name
  FORK_FAILED                  635 Fork failed
  BAD_NI_HANDLE                636 Invalid NI handle
  REXEC_FAILED                  637 rexec failed
  TP_START_FAILED              638 Starting the TPs failed
  NI_DG_SEND_FAILED            639 NiDgSend failed
  INTERNAL_ERROR                640 Internal error
  GW_HOST_UNKNOWN              664 Gateway host unknown
  GW_SERVICE_UNKNOWN            665 Gateway service unknown
  GW_NI_ERROR                  666 NI error
  GW_EXEC_FAILED                667 exec failed
  R2_RESTARTED                  668 R/2 restarted
  SYM_DEST_TOO_LONG            669 Symb. destination too long
  NO_MORE_SIDE_INFO_ENTRY       670 No more side info entries
  R3_LOGIN_FAILED              672 R/3 Login failed
  IMS_ERROR_PURGING            673 IMS error purging
  PENDING_TERM_OUTPUT          674 Timeout of reg. programs
  GW_SECURITY_ERROR            676 TP not registered
  GW_TIMEOUT_REG_PRGM          677 Timeout of registered program
  TP_REGISTERED                678 TP is registered
  TP_NOTREGISTERED              679 TP not registered
  TP_REG_SECU_ERROR            720 Security violation for reg. prgrms
  GW_SNC_DISABLED              721 SNC deactivated
  GW_SNC_REQUIRED              722 SNC required
  GW_SNC_NAME_NOT_SET           723 SNC name not defined
  GW_SNC_NAME_NO_DEFAULT        724 Default SNC name not permitted
  GW_SNC_PROT_NOT_SUPP          725 Log does not support SNC
  GW_R3_NOT_CONNECTED          726 No local R/3 system
  GW_SNC_REQUIRED_FOR_LU_TP     727 SNC required
  CONV_ID_NOT_FOUND            728 Conversation ID not found
  GW_SNC_SECURE_PORT            729 Comm. must make SNC
  GW_SNC_START_EXT_DIS          730 Start of ext. Program deactivated
  GW_SHUTDOWN                  731 Gateway was shut down
  GW_REM_PRGM_DISABLED          732 No external programs
  GW_STOLEN_CONVID              733 Conversation ID does not fit
  GW_NET_CONV_ERROR             734 Net Conv Error
  GW_MONITOR_DISABLED           735 Monitor not active
  GW_DUPLICATE_CVID            736 Conv. ID not unique
  GW_CONNECT_TIMEOUT            737 Timeout of connection setup to remote system
  GW_REQ_TO_DP_FAILED          738 Request could not be transferred to the dispatcher
  GW_CLIENT_ALREADY_DISC        739 Connection partner has already disconnected the connection
  GW_NO_HOST_IN_ROUTE           740 No hosts contained in route
  GW_ROUTE_CONNECT_DIS          741 Route already disconnected (no longer used)
  GW_CONN_IS_FREE              742 Connection already released
  GW_CONN_IS_DISC              743 Connection disconnected
  GW_REQBLK_ADM_ERROR           744 Error in request processing
  GW_BUFINFO_ERROR              745 Error in buffer handling
  GW_HDLINFO_ERROR              746 Error in network handling
  TP_REG_NOREG_ERROR            747 Number of registrations exceeded for this program
  TP_REG_ACCESS_DENIED          748 Access to registered server denied
  GW_PARAM_NOT_FOUND            749 Parameter could not be found
  GW_PRXY_ACCESS_DENIED         750 Gateway must not be used as a proxy
  GW_ACCEPT_TIMEOUT            751 Timeout when logging on (gw/accept_timeout)
  GW_WRONG_SERVER              752 Not connected to the gateway
  C_NO_SIDE_INFO_GW            760 No side info file
  C_RECEIVE_WITH_PAR            761 CMRCV failed
  C_NO_SNC_LIB_ARG              762 SNC library not in arg. list
  C_NO_SNC_NAME_ARG            763 SNC name not in argument list
  C_SNC_INV_HANDLE              764 SNC invalid handle
  C_SNC_DISABLED                765 SNC deactivated
  C_SNC_ERROR                  766 General SNC error
  C_SNC_MODE_ON                767 SNC required
  C_SNC_NOT_AVAILABLE           768 SNC not available
  C_ILLEGAL_PARAMETER2          769 Invalid parameter
  C_AREA_TOO_SMALL              770 Memory area too small
  C_SNC_INV_STATE              771 Invalid SNC status
  C_RETURN_CODES                772 Error numbers
  C_REG_STATE_CHECK            773 Status violation during registration
  C_CPICTERR                    774 Error text for error number
  C_NO_SYMDEST                  775 No symbolic destination
  C_FUNCTION_NOT_SUPPORTED      776 Function not supported
  C_NET_CONV_ERROR              777 Conversion error when reading or writing data
  C_SIDEINFO_DISABLED          778 Access to side info file deactivated
  C_TIMEOUT_BLOCK              779 Timeout for blocking network call
  C_FAILOVER_ERROR              780 Error when communicating with failover software
  C_PROXY_ERROR                781 Error when communicating with the proxy server
  C_MPI_ERROR                  782 Error when communicating with memory pipes (Mpi)
  C_MTX_ERROR                  783 Error with lock management (mutex)
  C_CS_ERROR                    784 Error with lock management (critical section)

Friday, 18 October 2024

What is the difference between single-container and multi-container in HANA databases?

The major difference between single-container and multi-container databases in SAP HANA lies in their architecture and management capabilities.

 Single-Container Databases

 A single-container database system consists of only one database instance managed by the SAP HANA database management system. This configuration includes:

 Single Database Instance: 

There is only one database that encompasses all the processes and memory structures needed for managing the database.

Simplicity: 

The single-container system provides a straightforward management approach, ideal for environments with fewer users and simpler data management needs.

 Limited Isolation: 

In this mode, multiple schemas can be managed, but all users and applications share the same database instance, which can lead to challenges in resource isolation and security.

 

Multi-Container Databases (Multitenant Database Containers - MDC)

 In contrast, multi-container databases allow multiple isolated tenant databases to exist within a single SAP HANA system. Key features include:

 Multiple Tenant Databases: 

Each database is fully isolated with its own users, catalog, resources, and data, enabling better security and resource management.

 Efficient Resource Utilization: 

All tenant databases share the same system resources (memory and CPU cores), but they are managed independently, allowing for flexible resource allocation.

 Improved Backup and Recovery: 

Users can perform backup and recovery operations at both the tenant and system levels, which provides greater flexibility and simplified maintenance.

Cloud-Based Applications: 

The architecture supports multi-tenant cloud applications more effectively, allowing different applications to run concurrently without affecting each other.
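Tenant administration in an MDC system is done with plain SQL against the system database. A minimal sketch, assuming a connection to SYSTEMDB; the tenant name MYTENANT and the password are purely illustrative:

-- Run while connected to the system database (SYSTEMDB).
CREATE DATABASE MYTENANT SYSTEM USER PASSWORD Secret123;

-- List all tenant databases and their current state.
SELECT DATABASE_NAME, ACTIVE_STATUS FROM M_DATABASES;

-- Stop a tenant, for example before maintenance.
ALTER SYSTEM STOP DATABASE MYTENANT;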

 Summary

In summary, while single-container databases offer simplicity and are suited for smaller environments, multi-container databases provide enhanced security, resource utilization, and management flexibility, making them suitable for complex and cloud-based applications. The transition to a multi-container architecture is advantageous for organizations that need improved scalability and efficient resource management.

 

Tuesday, 24 September 2024

How does data compression in the column store of the HANA database work? #SAP HANA

Data compression in the SAP HANA database's column store significantly improves storage efficiency and performance. The method employs various compression techniques that allow for substantial reductions in data size, thus optimizing memory usage and processing times.

Using SAP column store tables, data compression can achieve ratios of up to 11 times, presenting a viable cost-saving solution for data storage in the HANA database. Such high compression ratios are critical in enhancing the performance of data-intensive applications.

The column store enables efficient compression of data, which reduces the costs associated with keeping data in main memory. This efficiency is essential for managing large datasets typical of enterprise applications.

Columnar data storage facilitates highly efficient compression, primarily because most columns contain only a few distinct values relative to the number of rows. This structure allows for targeted data access, improving overall read efficiency and query performance, particularly with extensive data sets.

Administrators can monitor compression performance and trigger optimizations as needed. By understanding the compression ratios and structures in place, they can ensure that the database operates at peak efficiency, maintaining optimal space usage and improving access speeds.

Data in column tables can have a two-fold compression:

  • Dictionary compression

This default method of compression is applied to all columns. It involves the mapping of distinct column values to consecutive numbers, so that instead of the actual value being stored, the typically much smaller consecutive number is stored.

  • Advanced compression

Each column can be further compressed using different compression methods, namely prefix encoding, run length encoding (RLE), cluster encoding, sparse encoding, and indirect encoding. The SAP HANA database uses compression algorithms to determine which type of compression is most appropriate for a column. Columns with the PAGE LOADABLE attribute are compressed with the NBit algorithm only.
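To see which compression method the database actually chose for each column, you can query the monitoring view M_CS_COLUMNS; a small sketch, with schema and table names as placeholders:

-- Compression type chosen per column, plus row and distinct-value counts
SELECT COLUMN_NAME, COMPRESSION_TYPE, "COUNT", DISTINCT_COUNT
FROM M_CS_COLUMNS
WHERE SCHEMA_NAME = '<schema>' AND TABLE_NAME = '<table_name>';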

Compression is automatically calculated and optimized as part of the delta merge operation. If you create an empty column table, no compression is applied initially as the database cannot know which method is most appropriate. As you start to insert data into the table and the delta merge operation starts being executed at regular intervals, data compression is automatically (re)evaluated and optimized.

Automatic compression optimization is controlled by the parameter active in the optimize_compression section of the indexserver.ini configuration file. This parameter must have the value yes.
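A quick way to verify this setting, and to trigger a re-evaluation manually for a single table (the UPDATE ... WITH PARAMETERS statement is documented for SAP HANA; names are placeholders):

-- Check the optimize_compression/active parameter
SELECT FILE_NAME, SECTION, KEY, VALUE
FROM M_INIFILE_CONTENTS
WHERE SECTION = 'optimize_compression' AND KEY = 'active';

-- Manually trigger compression (re)optimization for one table
UPDATE "<schema>"."<table_name>" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'YES');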

 

Cost Functions for Optimize Compression

The cost functions for optimize compression are in the optimize_compression section of the service configuration (e.g. indexserver.ini)

  • auto_decision_func - if triggered by MergeDog
  • smart_decision_func - if triggered by SmartMerge

 

Note: 

  1. Advanced compression is applied only to the main storage of column tables. As the delta storage is optimized for write operations, it has only dictionary compression applied.
  2. If the standard method for initiating a delta merge of the table is disabled (AUTO_MERGE_ON column in the system view TABLES is set to FALSE), automatic compression optimization is implicitly disabled as well. This is the case even if the AUTO_OPTIMIZE_COMPRESSION_ON column is set to TRUE in the system view TABLES. It is necessary to disable auto merge if the delta merge operation of the table is being controlled by a smart merge triggered by the application. For more information, see the section on merge motivations.

 

Monday, 23 September 2024

FAQ: SAP HANA Native Storage Extension (NSE)

1: How can I activate NSE?

NSE can be activated via DDL commands on the desired tables. There are no additional configuration steps needed.
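For example, to declare a table or one of its partitions page loadable (a sketch; schema, table, and partition number are placeholders):

-- Move an entire table to NSE (page loadable)
ALTER TABLE "<schema>"."<table_name>" PAGE LOADABLE;

-- The load granularity can also be set per partition
ALTER TABLE "<schema>"."<table_name>" ALTER PARTITION 1 PAGE LOADABLE;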

2. I have activated NSE for a database object but want to revert my changes. How can I achieve this?

Changes to database objects can be reverted by setting the load granularity to default.

ALTER TABLE "<table_name>" DEFAULT LOADABLE;

Adding the CASCADE option overwrites all modifications on the layers below (partitions and columns):

ALTER TABLE "<table_name>" DEFAULT LOADABLE CASCADE;

 

3. Where do I find information about buffer cache usage?

Information on the buffer cache usage such as hit ratio and size can be obtained in the following locations:

  • M_BUFFER_CACHE_STATISTICS
  • M_BUFFER_CACHE_POOL_STATISTICS
  • SQL: "HANA_NSE_BufferCache" (SAP Note 1969700)
  • M_SQL_PLAN_CACHE (starting SAP HANA 2.0 SPS07, columns *_BUFFER_CACHE_IO_READ_SIZE, *_BUFFER_CACHE_PAGE_MISS_COUNT and *_BUFFER_CACHE_PAGE_HIT_COUNT)
  • SQL: "HANA_SQL_SQLCache_2.00.070+" (SAP Note 1969700)

4. How can I calculate the memory consumption of NSE enabled tables?

When pages of NSE tables are loaded into memory, they are loaded into the buffer cache. Some helper structures for the columns are still loaded into the main store area of HANA and reside there until the column is unloaded.
After reading an NSE table, its non-pageable area is loaded into the main store memory, while the pageable area is loaded into the buffer cache. If you check the memory consumption of the table just after this read, the value is the sum of the main store memory and the memory occupied in the buffer cache.

Keep in mind that the allocation in the buffer cache is transient: as soon as another NSE table is read, the pages loaded into the buffer cache may be displaced in a least-recently-used fashion, or they may stay there longer if there is enough space in the buffer cache to accommodate both tables. Given this transient characteristic, when determining the memory savings brought by NSE, the buffer cache area should not be counted toward the total memory allocated by the NSE table.

SQL: "HANA_Tables_ColumnStore_TableSize_2.00.030+" (SAP Note 1969700) reports the NSE-related memory consumption in column PAGED_GB.

5. Where is the buffer cache located?

The buffer cache is stored in the heap allocator Pool/CS/BufferPage. The allocator always resides in DRAM. This also applies to systems using PMEM or Fast Restart Option.

6. How does the buffer cache behave during memory pressure?

The NSE buffer cache is not unloaded as part of shrink operations. As a result, the memory used by the NSE cache cannot be released and reused by other components if the database runs out of memory.

To avoid memory issues, the buffer cache should be configured with a reasonable size depending on the size of tables located in NSE.
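The size cap is set in the buffer_cache_cs section of indexserver.ini (parameter described in SAP Note 2799997); the value below, in MB, is purely illustrative:

-- Cap the NSE buffer cache at 100 GB
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('buffer_cache_cs', 'max_size') = '102400' WITH RECONFIGURE;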

7. How to handle NSE advisor recommendations to move columns like $trexexternalkey$ or $AWKEY_REFDOC$GJAHR$?

$trexexternalkey$ and $AWKEY_REFDOC$GJAHR$ are internal columns (SAP Note 1986747) that refer to the primary key and to a multi-column index (concat attribute), respectively. Indexes can be moved to NSE as well.

a) identify index name:

SELECT SCHEMA_NAME, TABLE_NAME, INDEX_NAME, INDEX_TYPE, CONSTRAINT FROM INDEXES WHERE TABLE_NAME = '<table_name>';

b) change the index load granularity

ALTER INDEX "<schema>"."<index_name>" PAGE LOADABLE;

Example: ALTER INDEX "PLAYGROUND"."_SYS_TREE_CS_#170484_#0_#P0" PAGE LOADABLE;

 

8. How can I get information on the impact of NSE on query performance?

Starting with SAP HANA 2.0 SPS07, the monitoring view M_SQL_PLAN_CACHE contains several additional columns that provide deeper insight into NSE usage during query execution (*_BUFFER_CACHE_IO_READ_SIZE, *_BUFFER_CACHE_PAGE_MISS_COUNT and *_BUFFER_CACHE_PAGE_HIT_COUNT).

The following SQL commands from SAP Note 1969700 support the investigation:

  • SQL: "HANA_SQL_SQLCache_2.00.070+" 
  • SQL: "HANA_SQL_SQLCache_TopLists_2.00.070+" 

9. Does DDIC support NSE?

Since S/4 HANA 2021, DDIC supports the load unit settings "Page Preferred" and "Page Enforced". Depending on the S/4 HANA release, these settings can change.
The current configuration can be checked via SE11 -> "Technical Settings" -> "DB-Specific Properties".
Alternatively, you can query the details directly in DBACOCKPIT via the following SQL:

select
    t.TABLE_NAME, t.LOAD_UNIT HDB_LOAD_UNIT,
    CASE WHEN d.LOAD_UNIT = 'P' THEN 'Page Preferred'
      WHEN d.LOAD_UNIT = 'Q' THEN 'Page Enforced' END DDIC_LOAD_UNIT
FROM
    TABLES t full outer join DD09L d ON t.table_name = d.tabname
WHERE
    (t.LOAD_UNIT = 'PAGE' OR d.LOAD_UNIT IN ('P', 'Q')) AND t.SCHEMA_NAME LIKE 'SAP%';

For further details on DDIC integration, please see SAP Note 2973243.

10. When are DDIC load unit settings applied and what happens if an upgrade changes the DDIC load unit?

Whether load unit settings are applied automatically depends on the scenario:

  1. When converting from anyDB to SAP S/4 HANA, the DDIC load unit is applied in the course of the migration; tables with "Page Preferred" and "Page Enforced" are created using NSE.
  2. When a fresh installation of S/4 HANA 2021+ is performed, the DDIC load unit is applied directly during installation.
  3. A conversion from Suite on HANA to S/4 HANA or an S/4 HANA upgrade does not enforce DDIC load unit settings. In this scenario, only exchange tables are created with the new settings; the load unit setting of existing tables is preserved. Only a change to "Page Enforced" results in a load unit change on the database. Other SAP_BASIS objects with load unit "Page Preferred", such as BALDAT, CDPOS, and EDID4, stay as is and are not changed automatically.

11. Do I need additional disk space when using NSE?

During conversion to page loadable, the persistence structures are rewritten for optimal usage with NSE. In most cases, this does not change the total amount of disk space needed. However, in rare cases, tables, partitions, and columns that are converted to NSE may see disk usage increase by up to 80%. Some column data types (such as TIMESTAMP, DATE, CHAR, VARCHAR and NVARCHAR) do not support all possible dictionary compressions when columns are stored as page loadable.

12. Is the buffer cache preloading data after a system restart just like the normal column store does?

No, the buffer cache will be empty after restart and only be populated on demand.

Note: 

Please refer to the SAP note to get full details. 

2799997 - FAQ: SAP HANA Native Storage Extension (NSE)

What is SAP HANA Native Storage Extension? #Data Archiving

SAP HANA 2 SPS 04 provides a new feature called SAP HANA Native Storage Extension (NSE). NSE is a disk-based extension to the in-memory COLUMN STORE in SAP HANA. Instead of loading a whole column, only the needed pages are loaded into the buffer cache when accessing the data.

SAP HANA Native Storage Extension (NSE) is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. It integrates disk-based or flash-drive-based database technology with the SAP HANA in-memory database for an improved price-performance ratio. 

SAP HANA offers various software solutions to manage multi-temperature data (hot, warm, and cold), such as DRAMs for hot data and SAP HANA Extension Nodes, SAP HANA dynamic tiering for warm data, and SAP HANA Cold Data Tiering for cold data. 

Hot data is used to store mission-critical data for real-time processing and analytics. It is retained continuously in SAP HANA memory for fast performance and is located in the highest performance (and highest TCO) storage. 

Warm data is primarily used to store mostly read-only data that need not be accessed frequently. The data need not reside continuously in SAP HANA memory, but is still managed as a unified part of the SAP HANA database ― transactionally consistent with hot data, participating in SAP HANA backup and system replication operations, and stored in lower-cost stores within SAP HANA. 

Cold data is used to store read-only data, with very infrequent access requirements. You manage cold data separately from the SAP HANA database, but you can still access it from SAP HANA using SAP HANA’s data federation capabilities. This image shows the difference between standard HANA in-memory storage and the storage offered with NSE:

The capacity of a standard SAP HANA database is equal to the amount of hot data in memory. However, the capacity of a SAP HANA database with NSE is the amount of hot data in memory plus the amount of warm data on disk.

Since growth in data volume results in increased hardware costs, the ability to decouple data location from a fixed storage location (layer) is one of the key themes of a multi-temperature data storage strategy.

NSE is integrated with other SAP HANA functional layers, such as query optimizer, query execution engine, column store, and persistence layers. Key highlights of NSE are:

  • A substantial increase in SAP HANA data capacity, with good performance for high-data volumes.
  • The ability to co-exist with the SAP HANA in-memory column store, preserving SAP HANA memory performance.
  • An enhancement of existing in-market paging capabilities by supporting compression, dictionary support, and partitioning.
  • An intelligent buffer cache that manages memory pages in SAP HANA native storage extension column store tables.
  • The ability to monitor and manage buffer cache statistics via system views.
  • The ability to support any SAP HANA application.
  • A simple system landscape with high scalability that covers a large spectrum of data sizes.
  • An advisor that collects object access statistics and provides column store object load unit recommendations.
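For the advisor mentioned in the last point, the following sketch shows how it is typically enabled and read, assuming the cs_access_statistics parameters and the M_CS_NSE_ADVISOR view as documented for SAP HANA 2.0:

-- Enable access-statistics collection for the NSE advisor
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
SET ('cs_access_statistics', 'collection_enabled') = 'true' WITH RECONFIGURE;

-- Read the resulting load-unit recommendations
SELECT * FROM M_CS_NSE_ADVISOR;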

Note: 

  1. The NSE feature in SAP HANA does not require you to modify your applications.
  2. Although SAP HANA 2.0 calculates 10% of memory for the buffer cache by default, this memory is only reserved and not allocated. SAP HANA accesses 100% of its memory (including the 10% reserved for the buffer cache) if you are not using NSE.
  3. If there are page loadable tables in your current version of SAP HANA and you move to another, later, version, only those tables that were designated as page-loadable in the earlier version use the buffer cache in the later version (up to the limit that was calculated in the original version of SAP HANA you were running).

 

 

What is SAP Information Lifecycle Management (SAP ILM) ?

Information has a lifecycle. It is created, it lives within databases and systems, it changes, and it is archived and eventually deleted. With SAP Information Lifecycle Management (SAP ILM), companies can meet their data retention, data destruction, and system decommissioning requirements and obtain compliance with legal and regulatory mandates. As a result, SAP Information Lifecycle Management (SAP ILM) helps companies streamline their technical infrastructure, reduce IT costs, and improve IT risk and compliance management.

SAP Information Lifecycle Management (SAP ILM) is based on the following pillars:

 • Data archiving (active data and system):

  • Analyze data volumes
  • Securely move data from the database to the archive
  • Access archived data conveniently

• Retention management (end-of-life data):

  • Define and manage all retention policies across the enterprise
  • Manage the destruction of data responsibly based on policies
  • Enforce retention policies
  • Use secure information lifecycle management–aware storage (partner offerings)
  • Perform e-discovery and set legal holds

• System decommissioning (end-of-life system):

  • Decommission SAP and non-SAP legacy systems to a central retention warehouse 
  • Enforce retention policies on data from the shut-down system
  • Run reporting on data from the shut-down system (SAP Business Warehouse (SAP BW) and local reporting)
  • Use predefined business warehouse queries for reporting
  • Interpret and understand data in archives without the help of the original system

 

To learn more about SAP Information Lifecycle Management (SAP ILM), 

please contact your SAP representative, write to us at ilm@sap.com, or visit us on the Web at http://scn.sap.com/community/information-lifecycle-management.

Thursday, 19 September 2024

How to check SAP application log statistics with the program "SBAL_STATISTICS"? # SAP Basis Administrator

To check SAP application log statistics, you can use the program "SBAL_STATISTICS", which provides reporting capabilities for analyzing and obtaining detailed statistical information about the application logs generated by various applications within SAP. This program serves as a useful tool for effectively monitoring and managing log entries.


Based on the findings from the SBAL_STATISTICS report, administrators may decide to clean up old logs, adjust logging levels, or take corrective action on the applications concerned.

Accessing the Program

You can access the program through Tcode "SA38", as shown below

Click Execute and select the statistic collection option as required.


STATISTIC COLLECTIONS

TIME CREATION:

get the number of logs created in each month of a year

CLIENTS:

get the number of logs in each client

OBJECTS, SUBOBJECTS, MESSAGES:

detailed overview of logs based on OBJECT and SUBOBJECT

columns:

  • OBJECT, SUBOBJECT (defined in TCODE SLG0)
  • MIN DATE - the oldest creation date (highlighted if older than 180 days)
  • MAX DATE - the latest creation date (highlighted if older than 180 days)
  • Logs Cnt - number of logs
  • MsgCnt All - number of messages
  • MsgCnt E, W, I, S, A - number of messages of type Error, Warning, Information, Success, Cancel
  • reached exp date - number of logs that have reached their expiry date and can be deleted via the SLG2 option "Only logs that have reached their expiration date"
  • and can be deleted - number of logs that can be deleted before their expiry date via the SLG2 option "And logs that can be deleted before the expiration date"
  • cannot be deleted - number of logs that cannot be deleted via SLG2 (display only); such logs can be deleted only by the application that created them
  • never expire - number of logs whose expiry date is set to 31/12/9999
  • % - percentage of all logs in the current client
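This overview can also be approximated directly on the database: the application log headers live in table BALHDR, whose standard OBJECT, SUBOBJECT, and ALDATE fields allow a rough equivalent of the OBJECTS view:

-- Logs per object/subobject with oldest and latest creation date,
-- roughly mirroring the OBJECTS view of SBAL_STATISTICS
SELECT OBJECT, SUBOBJECT, COUNT(*) AS LOGS_CNT,
       MIN(ALDATE) AS MIN_DATE, MAX(ALDATE) AS MAX_DATE
FROM BALHDR
GROUP BY OBJECT, SUBOBJECT
ORDER BY LOGS_CNT DESC;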

USERS:

get the number of logs for each user

PROGRAMS:

get the number of logs for each program

TCODE:

get the number of logs for each transaction code

EXPIRATION DATE: 

get the number of logs for each option of Expiration Date (SLG2)

- Only logs that have reached their expiration date

- And logs that can be deleted before the expiration date

- Non-deletable logs (display only)


RELID data, BALDAT: 

get the number of records per RELID in BALDAT

Statistic logs

To view the data, select an option and click Execute.

You can view log details under Details as shown below.

Document referred: 2524124 - Application log Statistics

Tuesday, 17 September 2024

What is a LOB and why is it important in the HANA database?

 In the SAP HANA database, Large Objects (LOBs) are unstructured data types such as images, videos, or documents. The importance of LOBs in HANA, particularly regarding their disk size, is significant for performance, storage management, and application development.

1. Definition of LOBs

LOBs, or Large Objects, refer to unstructured data types that include items such as pictures, PDFs, and XML content. They are characterized by their capability to be quite large, which necessitates special considerations in terms of storage and performance within databases like SAP HANA.

2. Disk Size Limitation

In SAP HANA, the current maximum size for a LOB is 2 GB. This limitation is critical as it influences how data is managed within the database and affects the strategies for data storage and retrieval employed by application developers.

3. Storage of LOBs in HANA

SAP HANA can store large binary objects (LOBs) such as images or videos on disk, rather than inside column or row structures in main memory. This distinction is vital for performance optimization, as storing large objects on disk allows for more efficient use of memory resources in main operations.
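To gauge the disk footprint, the LOB files can be summed per table from the monitoring view M_TABLE_LOB_FILES; a sketch assuming its PHYSICAL_SIZE column (details may vary by revision):

-- Disk size of LOB files per table, largest first (sizes in bytes)
SELECT TABLE_NAME, COUNT(*) AS LOB_FILES, SUM(PHYSICAL_SIZE) AS PHYSICAL_BYTES
FROM M_TABLE_LOB_FILES
GROUP BY TABLE_NAME
ORDER BY PHYSICAL_BYTES DESC;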

4. Impact on Performance

The way LOBs are managed in SAP HANA—their storage on disk instead of main memory—has implications for startup times and takeover processes. As LOBs can occupy considerable storage space, their management directly affects the overall efficiency of data access patterns in applications utilizing HANA.

5. Application Development Considerations

Understanding the size and nature of LOBs is essential for application developers working with SAP HANA. They must consider these aspects when designing data models and structures to ensure optimal performance and adherence to size constraints for LOB storage.

How does SAP table "BALDAT" enhance the performance of the SAP system, and why is it important?

 #SAP Basis Administrator

The BALDAT table in SAP serves as a standard repository for logging application events. It is crucial for tracking, analyzing, and troubleshooting various activities within the SAP system. Understanding its technical details and significance will aid users in managing logs efficiently, enhancing system performance, and maintaining operational integrity.

1. Overview of the BALDAT Table

The BALDAT table, also referred to as "Application Log: Log Data," is a standard table used within SAP R/3 ERP systems to store detailed log information related to application processes. It provides a structured format for capturing critical data necessary for monitoring system activities and auditing changes made during transactions.

2. Technical Structure

The BALDAT table comprises several fields that define the data structure, including, but not limited to:

  • MANDANT (Client): Identifies the client within which the log entry is stored, fundamental for data organization and segregation across different business environments.
  • RELID (Region in IMPORT/EXPORT Data Table): This field indicates the region related to log entries, facilitating more refined data categorization.
  • LOG_HANDLE (Application Log: Log Handle): A unique identifier for each log entry, ensuring traceability and reference.
  • BLOCK (Internal Message Serial Number): Numbers log entries serially to assist in organizing log data.

Additional fields track the length of user data, data types, and various parameters related to the logs.

3. Importance of BALDAT in SAP Systems

The BALDAT table holds significant importance in an SAP system for various reasons:

  • Error Tracking and Troubleshooting: It captures detailed error messages, warnings, and informational logs generated during transactions, allowing administrators to investigate issues effectively. This capability is vital for diagnosing application performance and operational integrity.
  • Auditing and Security: Logs maintained in BALDAT can be leveraged for auditing purposes. They provide insights into user activities, changes made to data, and potential security violations, ensuring compliance with regulatory standards.
  • Job Monitoring: The table records logs related to job scheduling and execution, allowing administrators to track the status and outcomes of automated tasks and background processes, enhancing overall system management.

4. Log Management and Maintenance

Given that application logs can grow exponentially, regular maintenance of the BALDAT table is critical. Tools like the SBAL_DELETE report can be used to automate the cleanup of outdated log entries, minimizing performance degradation due to excessive log data. Furthermore, administrators can analyze logs using transactions such as SLG2 to identify non-deletable logs and manage retention effectively.
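Before scheduling a cleanup, it helps to gauge how large the table has become; a minimal sketch run directly on the database:

-- Record count per RELID in BALDAT as a rough size indicator
SELECT RELID, COUNT(*) AS RECORDS
FROM BALDAT
GROUP BY RELID;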

Please refer to SAP Note: 

195157 - Application log: Deletion of logs

2524124 - Application log Statistics

5. Conclusion and Best Practices

In summary, the BALDAT table's structured logging capability and detailed recording of application activities make it an essential component of the SAP ecosystem. Regular monitoring and maintenance of this table, coupled with a proactive cleanup strategy, can significantly enhance system performance and operational compliance. Best practices include the periodic execution of log cleanup reports, adherence to logging policies, and the implementation of comprehensive log analysis procedures.

This comprehensive overview provides a clearer understanding of the technical specifications and significance of the BALDAT table within SAP systems.