Tuesday 28 April 2020

How to install HANA Database in Non-Interactive Mode?

In an SAP HANA system installation, passwords are mandatory parameters. There are three methods for configuring passwords during HANA system installation.


  • Interactive Mode
  • Command Line 
  • Configuration File 

The interactive installation is available for the SAP HANA database lifecycle manager in both the graphical (hdblcmgui) and command-line (hdblcm) interfaces. Passwords are entered manually, one by one, as they are requested by the installer. This method is preferred for quick, individual system installations.


There are two non-interactive ways of installing the HANA database:

1: COMMAND LINE

Configuring passwords in the command line is a two-step process. 

 - First, a simple text file with passwords in XML syntax should be created and saved in the home directory of the root user. 

 - Then the file can be passed via standard input, using the read_password_from_stdin parameter on the command line together with batch mode.



Parameters specified in the command line override parameters specified in the configuration file. Since this method is the most powerful and flexible method, it is often the preferred method for installing multiple SAP HANA systems at one time.

The following is an example of the password file in XML syntax: 

Passwords.xml

<?xml version="1.0" encoding="UTF-8"?> 
<Passwords> 
<password>Welcome1234</password> 
<sapadm_password>Welcome1234</sapadm_password> 
<system_user_password>Welcome1234</system_user_password> 
<root_password>Root1234</root_password>
</Passwords> 


Now, the password file (stored in the root user's home directory) is called from the command line using standard input, the read_password_from_stdin=xml parameter, and batch mode: 

cat ~/Passwords.xml | ./hdblcm --sid=DB1 --number=42 --read_password_from_stdin=xml -b
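Since the password file contains clear-text passwords, it is good practice (not required by the installer) to restrict its permissions before the installation and to remove it afterwards. A minimal sketch, reusing the example above:

chmod 600 ~/Passwords.xml
cat ~/Passwords.xml | ./hdblcm --sid=DB1 --number=42 --read_password_from_stdin=xml -b
rm ~/Passwords.xml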


2: CONFIGURATION FILE

It is possible to specify passwords in the configuration file.

A configuration file template is created with all the parameters set to their default values. The configuration file is edited to the preferred parameter values, then it is saved, and the values are read by the installer during installation. This method is preferred for a one-step installation that can be re-created several times. If passwords are specified in the configuration file, the configuration file should be stored in the home directory of the root user, for security reasons.
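As a sketch of how such a template can typically be generated with hdblcm itself (the exact option name may differ between hdblcm versions, so treat this as an assumption and check ./hdblcm --help):

./hdblcm --dump_configfile_template=/root/configfile1.cfg
vi /root/configfile1.cfg    # set passwords and other parameters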
Example 

The following is an example of the configuration file, with configured password parameters: 

configfile1.cfg

# Root User Password 
root_password=Root1234 ... 

# SAP Host Agent (sapadm) Password 
sapadm_password=Welcome123 ...

# System Administrator Password 
password=Welcome123

 # Database User (SYSTEM) Password 
system_user_password=Welcome123




Now, the configuration file (stored in the root user's home directory) is called from the command line using the configfile parameter:

 ./hdblcm --sid=DB1 --configfile=~/configfile1.cfg


Rupesh Chavan

Thanks 








Which are the users automatically created during the HANA installation, and what is their use?

In this blog, we will see the users that get created during the installation of the HANA DB. It's simple, but sometimes we don't know it. I hope this small but important piece of information will help you build your knowledge.

Users Created During Installation

The following users are automatically created during the installation: 

1:    <sid>adm            ( like "hdbadm"  ) 

  1. The operating system administrator. 
  2.  The user <sid>adm is the operating system user required for administrative tasks such as starting and stopping the HANA system. 
  3. The user ID of the <sid>adm user is defined during the system installation. The user ID and group ID of this operating system user must be unique and identical on each host of a multiple-host system. 
  4. The password of the <sid>adm user is set during installation with the password parameter. 

2:       sapadm    

  1. The SAP host agent administrator. 
  2. If there is no SAP host agent available on the installation host, it is created during the installation along with the user sapadm.
  3. If the SAP host agent is already available on the installation host, it is not modified by the installer. The sapadm user and password are also not modified. 
  4. The password of the sapadm user is set during installation with the sapadm_password parameter.

3:       system 

  1. The database superuser. 
  2. Initially, the SYSTEM user has all system permissions. Additional permissions can be granted and revoked again, however, the initial permissions can never be revoked.
  3. The password of the SYSTEM user is set during installation with the system_user_password parameter.
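After the installation, these users can be verified at operating-system and database level. A minimal sketch, assuming the SID DB1 and instance number 42 from the earlier example:

id db1adm     # operating system administrator <sid>adm
id sapadm     # SAP host agent administrator
su - db1adm -c 'hdbsql -i 42 -u SYSTEM -p Welcome1234 "SELECT CURRENT_USER FROM DUMMY"'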


Thanks 


Rupesh Chavan









Tuesday 21 April 2020

What is the difference between the SAP HANA appliance model and the tailored data center integration (TDI) model?

Note: this is an important interview question, because you may have done a HANA database installation, but when asked whether the installation was the appliance or the TDI model, many of us struggle to answer.

Hence, in this blog, we will see what the appliance and TDI models of SAP HANA on-premise are.

The SAP HANA appliance model means preconfigured software and hardware bundled by an SAP hardware partner. With the appliance model, SAP coordinates all support requests for all components of the system, including hardware, with the responsible partners.


The SAP HANA tailored data center integration (TDI) model allows you more flexibility when integrating your SAP HANA system with your existing storage solution. TDI gives you the flexibility to install SAP HANA yourself on the same validated hardware as used for appliances, but you are responsible for all aspects of the system, including managing support with all the involved partners.



In the table below, we will see a few important differences.

To get more details, download the document from the link below.




What is the HANA appliance delivery model or SAP HANA Appliance?

With SPS 07 you can decide to implement SAP HANA using the appliance delivery model, meaning preconfigured software and hardware bundled by an SAP hardware partner, or you can opt for the SAP HANA tailored data center integration approach, which allows you more flexibility when integrating your SAP HANA system with your existing storage solution.

SAP HANA Appliance 


SAP HANA comes as an appliance combining software components from SAP, optimized on proven hardware provided by SAP’s hardware partners. This approach offers you well-defined hardware designed for the performance needs of an in-memory solution out of the box. The appliance delivery is the first choice if you are looking for a preconfigured hardware set-up and a preinstalled software package for a fast implementation done by your chosen hardware partner and fully supported by both the partner and SAP.

Now the question comes about roles and responsibilities. Please refer to the table below to get more information about the same.

(*) The customer is generally responsible for the maintenance of the SAP HANA system. If the customer has a special support agreement with the hardware partner, maintenance may be the responsibility of the hardware partner.


(**) SAP is the main point of contact and distributes all issues within the support organization by default, as is the case for other SAP applications. If the customer has defined special support agreements with the hardware partner, the customer should contact the hardware partner directly in the case of obvious hardware or operating system issues. If no agreements have been made, neither SAP nor the hardware partners are responsible for the installation, maintenance, and possible adjustment of external software installed on the SAP HANA system.








Which books should you refer to in order to become a HANA administrator?

There is a question about which topics you need to study to become a HANA administrator. This blog will help you get an answer to that question.


The map and descriptions given below will help you understand the importance of each book.







1: The SAP HANA Technical Operations Manual gives you an overview of the system and extended landscape administration

2: The SAP HANA Administration Guide is divided into a number of major areas including: 

  • Core system administration (with the SAP HANA studio and SAP HANA cockpit) 
  • Security administration 
  • Business continuity topics like scalability and HA and DR 
  • Data provisioning 
  • SAP HANA Extended Services (XS) administration 
  • Administration with HDBSQL 

3: The online help for DBACockpit for SAP HANA describes how to use DBA Cockpit to configure, manage and monitor SAP systems and their databases.

4: The SAP HANA Troubleshooting and Performance Analysis Guide describes what steps you can take to troubleshoot and resolve specific performance issues of your SAP HANA database and what you can do to enhance the performance in general.
5: The SAP DB Control Center Guide describes how to use SAP DB Control Center for aggregate monitoring and management, including starting and stopping managed systems and performing enterprise-wide health and alert monitoring for database products and their cockpits.

6: The Applications Operations Guide for SAP LT Replication Server describes how to administer and operate the SAP LT (Landscape Transformation) Replication Server for SAP HANA.



Rupesh Chavan

Thursday 16 April 2020

SAP HANA Data Provisioning and its types

What is Data Provisioning?


In SAP HANA, data provisioning also refers to data exposure, where SAP HANA can consume data from external sources without necessarily retaining that data in SAP HANA. Whether the data is retained by SAP HANA depends on the data provisioning tool you choose. Today’s business applications are powered by a rich variety of data types (transactional, spatial, text, graphics, and so on) consumed at various rates, from continuous, real-time sensor data to periodic batch loads of bulk data. SAP HANA can consume all types of data.


You cannot avoid referring to the acronym ETL when describing SAP HANA data provisioning. Various SAP HANA data provisioning tools can provide ETL capabilities to various extents. ETL stands for Extract, Transform, Load.

Extract

The first part of an ETL process involves extracting the data from the source systems. In many cases, this is the most challenging aspect of ETL, because extracting data correctly sets the stage for all subsequent processes. A good data provisioning tool should be able to extract data from any data sources, of any data type, at any time, and with good performance.

Transform

The transform stage applies a series of rules or functions to the data extracted from the source system to derive the data for loading into the target system. Depending on the project requirements, some data sources require very little data manipulation, if any, and some sources require significant cleaning and enhancement. Often data comes from many sources, so harmonization is also required to make sure that the loaded data appears as one in the target system.

Load

The load phase loads the data into the target system. The target system should be able to handle delta loads, where only the changes since the last time are loaded.

Methods of Data Provisioning for SAP HANA


At present, there are many different methods of data provisioning for SAP HANA.

These methods are as follows:

SLT — SAP LT Replication Server for SAP HANA

SLT works with both SAP and non-SAP source systems and supports the same databases that SAP supports for the SAP Business Suite (as they use the same database libraries).
These include SAP HANA, SAP ASE, SAP MaxDB, Microsoft SQL Server, IBM DB/2 (on all platforms), Oracle, and even the old Informix. The method that SLT uses for real-time replication is trigger-based: it creates triggers in the source systems on the tables it replicates. This could be a problem for some database administrators; for example, banks do not like triggers on their critical legacy production banking systems. In this case, you should instead look at SAP Replication Server, which only reads the various log files and has even less impact on the source systems.
The big advantage of SLT is that it can read and use pool and cluster tables from older SAP systems. In the past, due to database limitations, SAP had to use pool and cluster tables. These were tables within tables. As the ABAP data dictionary is platform and database independent, it is inherently different from the database data dictionary. By using this fact, SAP could create a single table (pool or cluster table) in the database, which would then unpack into many different and separate tables in the ABAP data dictionary. In ABAP, you would have 100 tables, but only one table in the database. If we read the database log file, like SAP Replication Server does, we might never find the ABAP table as the SAP system will “hide" it away inside the pool or cluster table.
As SLT uses an ABAP stack, it uses the ABAP data dictionary. As a result, it can read the contents of the pool and cluster tables. You can filter or perform a simple transformation of data as you load the data into SAP HANA.

SAP Data Services

If you have bad quality data going into your reporting system, you can expect bad quality data in your reports.
SAP Data Services is a good ETL tool. The “E" phase involves extracting data from the source systems. Data Services can read many data sources, even the obscure ones like old COBOL copybooks. It can also read SAP systems, even via extractors. It cannot, however, use all the SAP extractors. For example, if you need extractors that require activation afterward, then DXC would be a better tool to use.
In the “T" phase of ETL, Data Services can clean your data. There is no better SAP tool for doing this than Data Services. This eliminates the bad quality portion of your reporting results.
Finally, in the “L” phase of the ETL, Data Services loads the data into SAP HANA.

SDA - Smart Data Access 

A few years ago people wanted to put all of their data into a single data warehouse-type environment to analyze it there. This is sometimes called a data lake and is a physical data warehouse.

There are a few problems with this:

• You have to duplicate all your data, so now your data is double the size.
• You have to copy all that data via the network, which clogs up and slows your network.
• You need more storage.
• You doubled your cost and effort.
• The data in the new data warehouse is never up to date, and as such the data is never consistent.
• The single data warehouse system itself cannot handle all the different types of data requirements, such as unstructured data, graph-engine data, key-value pairs, spatial data, a huge variety of data, and so on.

Smart Data Access addresses this by exposing remote sources as virtual tables in SAP HANA, so the data is queried where it resides instead of being copied.

SRS - SAP Replication Server -- 


The big focus of SAP Replication Server is on low-impact, real-time replication. SAP Replication Server works with both SAP and non-SAP source systems and supports many databases. These include SAP ASE, Microsoft SQL Server, IBM DB/2, and Oracle. The method that SRS uses for real-time replication is log-based: it reads the log files of the source systems and has very little impact on these source systems.

SDI - Smart Data Integration--

The same advantages and disadvantages of SRS apply to SDI. This is because SDI is similar to SAP Replication Server, implemented inside SAP HANA when using the log file adapters. You buy this as an additional feature for SAP HANA, and as such it has separate pricing.

SDQ - Smart Data Quality -- The same advantages and disadvantages of SAP Data Services apply to SDQ. This is because SDQ is essentially Data Services functionality implemented inside SAP HANA. 

SDS - Smart Data Streaming -- SDS is based on the technology of Sybase ESP (Event Stream Processor).

DXC - Direct Extractor Connection --
The big drive for DXC is when you want to use complex extractors from SAP systems to load data into SAP HANA. Especially when the extractors require activation, Data Services might not be able to use them. The main business case for this is when you would like to get some data from an SAP Business Suite system, such as financial data.

Flat files and Excel worksheets -- Flat files or Excel worksheets are the simplest way to provide data to SAP HANA.
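As a minimal sketch of a flat file load (the schema, table, and file path are made up for the example, and the file must reside on the HANA server in a permitted directory), a CSV file can be imported with a single SQL statement, for example via hdbsql:

hdbsql -i 42 -u SYSTEM -p <password> "IMPORT FROM CSV FILE '/tmp/sales.csv' INTO MYSCHEMA.SALES WITH RECORD DELIMITED BY '\n' FIELD DELIMITED BY ','"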

Wednesday 15 April 2020

Sizing SAP In-Memory Database

Memory Sizing:

  1. Memory for column store
  2. Memory for row store
  3. Memory for caches and additional components

Disk Sizing

  1. Disk sizing for data files
  2. Disk sizing for log files

CPU sizing

Memory Sizing


Memory sizing is the process of estimating, in advance, the amount of memory that will be required to run a certain workload on SAP HANA. To get an answer about how much memory is required, we first need to answer the questions given below.

1. What is the size of the data tables that will be stored in SAP HANA? You may be able to estimate this based on the size of your existing data, but unless you precisely know the compression ratio of the existing data and the anticipated growth factor, this estimate may only be partially meaningful.

2. What is the expected compression ratio that SAP HANA will apply to these tables? The SAP HANA column store automatically uses a combination of various advanced compression algorithms (dictionary, RLE, sparse, and more) to best compress each table column separately. The achieved compression ratio depends on many factors, such as the nature of the data, its organization and data types, the presence of repeated values, the number of indexes (SAP HANA requires fewer indexes), and more.

3. How much extra working memory will be required for DB operations and temporary computations? The amount of extra memory will somewhat depend on the size of the tables (larger tables will create larger intermediate result-tables in operations like joins), but even more on the expected workload in terms of the number of users and the concurrency and complexity of the analytical queries (each query needs its own workspace).
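As a rough, purely illustrative back-of-the-envelope sketch (the compression factor, working-memory multiplier, and fixed overhead below are assumptions; use the official SAP sizing reports or the Quick Sizer for real sizing):

SOURCE_GB=2000     # uncompressed source data footprint (assumption)
COMPRESSION=4      # assumed compression factor
awk -v s="$SOURCE_GB" -v c="$COMPRESSION" 'BEGIN {
  data  = s / c            # estimated column store size
  total = data * 2 + 50    # x2 for working memory, ~50 GB for caches and services
  printf "Column store: ~%d GB, total RAM estimate: ~%d GB\n", data, total
}'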

Monday 13 April 2020

short dump SPOOL_INTERNAL_ERROR

In the dump itself and in the syslog, the system issues the message "Spool full" or "Spool overflow".

Reason 


In the standard SAP system, the number of spool requests that can be created is limited to 32000. If you reach this limit, there are no more free numbers and the errors described above occur.

Solution


You can raise the upper limit for spool requests. As of Release 4.0, you can set the upper limit to anywhere between 2 and 31 numbers (previously 99,000). However, we recommend that you do not set the interval higher than 999,999 because the human user finds higher numbers difficult to process. (This is not a technical restriction; it concerns the handling only.)

Proceed as follows:

1. Log on to the system in client 000 and call transaction SNRO.

2. Select the object SPO_NUM and choose the following button: Number ranges.

3. On the next screen, choose: Change Intervals.

4. In the "To number" column, change the upper limit of interval 01 to 999,999, for example.
The size of the interval also determines the maximum number of spool requests that can exist in the system. To ensure that the system performance does not deteriorate, you must use the report RSPO0041 or RSPO1041 on a regular basis to delete spool requests that are no longer required. The number of spool requests that can be held "officially" in the system depends to a great extent on the capacity of the database and the database computer. Only the number of spool requests simultaneously held in the system is relevant, not the size of the number intervals.

The changes must be made in client 000 only; they then apply to all clients. Changes in other clients have no effect. You can use the report RSPO_SHOW_SPO_NUM to display the setting from client 000 in all clients.

You can use the spool number monitor in transaction RZ20 to specify threshold values at which the system creates an alert once a certain percentage of the spool numbers is allocated.



How should profile parameter "rspo/store_location" be used?

rspo/store_location, data storage, spool request

Solution


For all releases:

Profile parameter "rspo/store_location" controls where TemSe stores the spooler data.

Possible values:

db

(lower case):
TemSe saves the spooler data in the database. Table TST03 is used for this.
Advantages:
Data will be backed up.

G

(upper case):
TemSe saves the data in global files. To do this, TemSe creates subdirectories in the "global" directory. Details of the path name are controlled by profile parameter "rsts/files/root/G". Do NOT change these details.
Advantages:
It is generally much faster.
Disadvantages:
Data is not regularly backed up.

L / T

(upper case):
TemSe saves the data in local files. Because these files can then only be accessed from the same application server, this setting is a special case, and should not be used. (For more information, see: profile parameters "rsts/files/root/L" and "rsts/files/root/T".)
Advantages:
It is much faster.
Disadvantages:
Data is not backed up.
Generating, processing, and spool work process (among others) must all run on the same application server.
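For illustration only, the parameter is maintained like any other profile parameter, typically in the default profile, and takes effect after the affected instances are restarted. A minimal sketch:

# entry in DEFAULT.PFL (illustrative)
rspo/store_location = db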


Space requirements of TemSe and spooler

Symptom


Keyword: TemSe, spooler
Assigned disk space is insufficient for files.
Unix error messages with "errno" 28 - "No space left on device".

Reason and Prerequisites


Less space was provided in the file system than was needed.

This SAP Note provides information about TemSe and spooler requirements.

First:
Space specifications always depend very much on the application profile, data volume and frequency of reorganization and cleanup actions.

TemSe stores datasets sequentially. At present, these are essentially background processing (BatchLog) logs and spool data.

Requirements for background processing logs:
Type: A very large number of small files
Location: /usr/sap/<SYSTEM>/SYS/global/???JOBLG/*
Delete: RSBTCDEL2
Measure: SP12 -> "TemSe - Administration of Temporary Sequential Data -> Memory allocation"

Requirements for spool requests:
Dependency: Only if profile setting rspo/store_location = G
Type: small and large files
Location: /usr/sap/<SYSTEM>/SYS/global/???SPOOL/*
Delete: RSPO0041 or RSPO1041
Measure: SP12 -> "TemSe - Administration of Temporary Sequential Data -> Memory allocation"

Requirements for output requests:
Dependency: Only during output process
Scope: like the largest lists to be printed
Type: few files
Location: /usr/sap/<SYSTEM>/<INSTANCE>/data/S*
and: /usr/spool/*/*
Delete: automatically after completion of output
Measure: with Unix resources

Caution:
Stored at the location /usr/sap/<SYSTEM>/SYS/global/* in particular is other data, such as payment media, reorganization data, and sort data, which can sometimes be VERY large.
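To see which of these locations actually consume space at operating-system level, a quick illustrative check (using the same <SYSTEM> placeholder as above):

du -sh /usr/sap/<SYSTEM>/SYS/global/*JOBLG 2>/dev/null
du -sh /usr/sap/<SYSTEM>/SYS/global/*SPOOL 2>/dev/null
df -h /usr/sap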

Solution


+ Assign more disk space.
+ Do not run complex applications at the same time.
+ Run cleanup and deletion programs regularly.

Standard jobs, reorganization jobs of SAP


Job name                          Program                       Variant   Repeat interval
SAP_REORG_JOBS                    RSBTCDEL2                     Yes       Daily
SAP_REORG_SPOOL                   RSPO0041/1041                 Yes       Daily
SAP_REORG_BATCHINPUT              RSBDCREO                      Yes       Daily
SAP_REORG_ABAPDUMPS               RSSNAPDL                      Yes       Daily
SAP_REORG_JOBSTATISTIC            RSBPSTDE                      Yes       Monthly
SAP_COLLECTOR_FOR_JOBSTATISTIC    RSBPCOLL                      No        Daily
SAP_COLLECTOR_FOR_PERFMONITOR     RSCOLL00                      No        Hourly
SAP_COLLECTOR_FOR_NONE_R3_STAT    RSN3_STAT_COLLECTOR           No        Hourly
SAP_REORG_PRIPARAMS               RSBTCPRIDEL                   Yes*      Monthly
SAP_REORG_XMILOG                  RSXMILOGREORG                 Yes       Weekly
SAP_CCMS_MONI_BATCH_DP            RSAL_BATCH_TOOL_DISPATCHING   No        Hourly
SAP_SPOOL_CONSISTENCY_CHECK       RSPO1043                      Yes       Daily
SAP_REORG_ORPHANED_JOBLOGS        RSTS0024                      Yes       Weekly
SAP_CHECK_ACTIVE_JOBS             BTCAUX07                      Yes       Hourly
SAP_DELETE_ORPHANED_IVARIS        BTC_DELETE_ORPHANED_IVARIS    Yes       Weekly
SAP_REORG_ORPHANED_TEMSE_FILES    RSTS0043                      Yes       Weekly

Which SAP HANA services have a persistence?

Only a subset of SAP HANA services has the possibility to persist data to disk:


  1. computeserver
  2. dpserver
  3. indexserver
  4. nameserver
  5. standalone statistics server (SAP Note 2147247)
  6. scriptserver (SAP HANA <= 2.0 SPS 02, >= 2.0 SPS 04)
  7. xsengine


Other services only work in memory without persisting data to disk.

We can display the services with persistence with the help of the SQL statement "HANA_Topology", which you can download from the link to SAP Note 1969700 given below.

https://launchpad.support.sap.com/#/notes/1969700
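Alternatively, as a rough sketch, the monitoring view M_VOLUMES lists the services that own a persistence volume and can be queried directly, for example via hdbsql:

hdbsql -i 42 -u SYSTEM -p <password> "SELECT DISTINCT HOST, SERVICE_NAME FROM M_VOLUMES ORDER BY HOST, SERVICE_NAME"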

Tuesday 7 April 2020

How to change the SQL output limit of HANA Studio

Many times we observe the message below after running a select query on a table that has a record count of more than 1,000. The query output gets restricted because of a limitation of HANA Studio.

It's simple to resolve this issue. Please follow the few steps below.



1: Open Hana Studio. 

2: Go to Window --> Preferences option as shown below 

3: A new screen will open --> go to SAP HANA --> Runtime --> Result, as shown below 

We can see on the above screen that the default value is 1000. We changed it to 5000, then clicked the Apply and Close option. 

4: Now the output limit is 5000, which we can see on the screen below. 

Note: we learned how to get more output by changing the value of the result option. Please note that setting a value higher than 1000 might impact your database performance, because queries need memory for processing and fetching output from the database.


Rupesh Chavan



Thursday 2 April 2020

What is HANA

HANA: High-performance Analytic Appliance. The technical secret behind SAP HANA is that it’s different by design. It stores all data in-memory, in a compressed columnar format.

It is an important analytic tool that provides real-time data for analysis. Other analytic tools are:

1: Spotfire
2: Big data analytics (Hadoop)
3: Cognos

The main aim of analytic tools is to provide data for analysis in as close to real time as possible. Another example is BWA (Business Warehouse Accelerator).

HANA is a hybrid database (it handles both OLTP and OLAP). It is an in-memory computing engine (IMCE) database.

In the HANA database, data is stored in primary storage, that is, in RAM, whereas in a traditional database the data is stored in secondary storage (disk).



There are a few important features of HANA DB.

1: In-memory database

It means most of the database objects are loaded in memory, which saves request processing time. In a traditional database the data is stored on disk, hence a request flows:

User --> CPU --> Memory --> Disk     and back from
Disk --> Memory --> CPU --> User.

Whereas in HANA the data is stored in memory, that is, close to the CPU, hence the request processing steps are reduced:

User --> CPU --> Memory      and back from
Memory --> CPU --> User.

2: Parallel processing of requests due to the high number of cores in multi-core CPUs

3: Column table

HANA supports columnar tables for data storage, which helps make read operations faster (meaning analytical operations are faster; hence it is called a High-performance Analytic Appliance). See the sketch after this list.

4: Delta merge operation

5: Table partition
6: Load and unload of tables from memory as per their use.
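A minimal sketch illustrating points 3, 4, and 6 with HANA SQL (the schema and table names are made up for the example), run here via hdbsql:

hdbsql -i 42 -u SYSTEM -p <password> "CREATE COLUMN TABLE MYSCHEMA.SALES (ID INT PRIMARY KEY, AMOUNT DECIMAL(10,2))"   # columnar storage (point 3)
hdbsql -i 42 -u SYSTEM -p <password> "MERGE DELTA OF MYSCHEMA.SALES"    # manual delta merge (point 4)
hdbsql -i 42 -u SYSTEM -p <password> "UNLOAD MYSCHEMA.SALES"            # unload the table from memory (point 6)
hdbsql -i 42 -u SYSTEM -p <password> "LOAD MYSCHEMA.SALES ALL"          # load it back into memory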


There are a few more features that help make the HANA DB fast.


Rupesh Chavan