LOCK OBJECTS IN SAP ABAP DICTIONARY

OVERVIEW

Lock objects are used in SAP to avoid inconsistencies when data is being inserted into or changed in the database.

SAP provides three types of lock objects.
  • Read lock (shared lock)
protects read access to an object. The read lock allows other transactions read access but not write access to the locked area of the table.
  • Write lock (exclusive lock)
protects write access to an object. The write lock allows other transactions neither read nor write access to the locked area of the table.
  • Enhanced write lock (exclusive lock without cumulation)
works like a write lock, except that it also protects against further accesses from the same transaction.

You can create a lock object in transaction SE11; give it a meaningful name starting with EZ, for example EZTEST_LOCK.

Use:

In almost all transactions you can see that when you open an object in change mode, SAP does not allow any other user to open the same object in change mode.

Example:

In HR, when we enter a personnel number in the master data maintenance screen, SAP does not allow any other user to change the same personnel number.

Technically:

When you create a lock object, the system automatically creates two function modules:

1. ENQUEUE_<lock object name>, to set the lock (insert the object into the lock queue).
2. DEQUEUE_<lock object name>, to release the lock set by the above function module.

You have to call these function modules in your program.

For a sample program, check transaction ABAPDOCU.
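
For illustration, a minimal sketch follows, assuming a lock object EZTEST_LOCK defined over table SPFLI with lock argument CARRID; the generated module names and parameter names depend on your own lock object definition.

* Hedged sketch: parameter names below follow the assumed lock object
* definition (table SPFLI, lock argument CARRID).
CALL FUNCTION 'ENQUEUE_EZTEST_LOCK'
  EXPORTING
    MODE_SPFLI     = 'E'          " E = exclusive (write) lock
    CARRID         = 'LH'
  EXCEPTIONS
    FOREIGN_LOCK   = 1            " another user already holds the lock
    SYSTEM_FAILURE = 2
    OTHERS         = 3.

IF SY-SUBRC <> 0.
  WRITE / 'Object is currently locked by another user'.
ELSE.
* ... change the data here ...
  CALL FUNCTION 'DEQUEUE_EZTEST_LOCK'
    EXPORTING
      MODE_SPFLI = 'E'
      CARRID     = 'LH'.
ENDIF.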


Purchase order creation using BAPI

The following code shows how to create a purchase order with the BAPI BAPI_PO_CREATE1.

*&---------------------------------------------------------------------*
*& Report ZBAPI_CREATE_PO
*&---------------------------------------------------------------------*
REPORT ZBAPI_CREATE_PO .
*&---------------------------------------------------------------------*
*DATA DECLARATION
CONSTANTS : C_X VALUE 'X'.

*Structures to hold PO header data
DATA : HEADER LIKE BAPIMEPOHEADER ,
HEADERX LIKE BAPIMEPOHEADERX .

*Internal tables to hold PO item data
DATA : ITEM LIKE BAPIMEPOITEM OCCURS 0 WITH HEADER LINE,
       ITEMX LIKE BAPIMEPOITEMX OCCURS 0 WITH HEADER LINE.

*Internal table to hold messages from the BAPI call
DATA : RETURN LIKE BAPIRET2 OCCURS 0 WITH HEADER LINE.

data : w_header(40) value 'PO Header'.

data : ws_langu like sy-langu.

data: V_EXPPURCHASEORDER like BAPIMEPOHEADER-PO_NUMBER.
*text-001 = 'PO Header' - define as text element
selection-screen begin of block b1 with frame title text-001.
parameters : company like header-comp_code default '1000' ,
doctyp like HEADER-DOC_TYPE default 'NB' ,
cdate like HEADER-CREAT_DATE default sy-datum ,
vendor like HEADER-VENDOR default 'RAJ',
pur_org like HEADER-PURCH_ORG default '0001' ,
pur_grp like HEADER-PUR_GROUP default '001' .

selection-screen end of block b1.

selection-screen begin of block b2 with frame title text-002.
parameters : item_num like ITEM-PO_ITEM default '00020',
material like ITEM-MATERIAL default 'M-127' ,
plant like ITEM-PLANT default '0001' ,
quantity like ITEM-QUANTITY default 200.

selection-screen end of block b2.
*&---------------------------------------------------------------------*
START-OF-SELECTION.
*&---------------------------------------------------------------------*
*DATA POPULATION
*&---------------------------------------------------------------------*
ws_langu = sy-langu. "Language variable

*POPULATE HEADER DATA FOR PO
HEADER-COMP_CODE = company .
HEADER-DOC_TYPE = doctyp .
HEADER-CREAT_DATE = cdate .
HEADER-VENDOR = vendor .
HEADER-LANGU = ws_langu .
HEADER-PURCH_ORG = pur_org .
HEADER-PUR_GROUP = pur_grp .

*&---------------------------------------------------------------------*
*POPULATE HEADER FLAGS
*&---------------------------------------------------------------------*
HEADERX-comp_code = c_x.
HEADERX-doc_type = c_x.
HEADERX-creat_date = c_x.
HEADERX-vendor = c_x.
HEADERX-langu = c_x.
HEADERX-purch_org = c_x.
HEADERX-pur_group = c_x.
*HEADERX-doc_date = c_x.
*&---------------------------------------------------------------------*
*POPULATE ITEM DATA
*&---------------------------------------------------------------------*
ITEM-PO_ITEM = item_num.
ITEM-MATERIAL = material.
ITEM-PLANT = plant.
ITEM-QUANTITY = quantity.
APPEND ITEM.
*&---------------------------------------------------------------------*
*POPULATE ITEM FLAG TABLE
*&---------------------------------------------------------------------*
ITEMX-PO_ITEM = item_num.
ITEMX-MATERIAL = C_X.
ITEMX-PLANT = C_X .
*ITEMX-STGE_LOC = C_X .
ITEMX-QUANTITY = C_X .
*ITEMX-TAX_CODE = C_X .
*ITEMX-ITEM_CAT = C_X .
*ITEMX-ACCTASSCAT = C_X .
APPEND ITEMX.
*&---------------------------------------------------------------------*
*BAPI CALL
*&---------------------------------------------------------------------*
CALL FUNCTION 'BAPI_PO_CREATE1'
  EXPORTING
    POHEADER          = HEADER
    POHEADERX         = HEADERX
*   POADDRVENDOR      =
*   TESTRUN           =
  IMPORTING
    EXPPURCHASEORDER  = V_EXPPURCHASEORDER
*   EXPHEADER         =
*   EXPPOEXPIMPHEADER =
  TABLES
    RETURN            = RETURN
    POITEM            = ITEM
    POITEMX           = ITEMX.

*&---------------------------------------------------------------------*
*Confirm the document creation by calling a database COMMIT
*&---------------------------------------------------------------------*
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    WAIT = 'X'.

END-OF-SELECTION.
*&---------------------------------------------------------------------*
*Output the purchase order number and the messages returned by the BAPI
*&---------------------------------------------------------------------*
WRITE / V_EXPPURCHASEORDER.
LOOP AT RETURN.
  WRITE / RETURN-MESSAGE.
ENDLOOP.


BADI Finding in ABAP

You can find BADIs in different ways:

1. First go to any transaction. In the menu bar choose System -> Status, note the program name, and double-click it to open the program attached to the transaction code. Now search for the call to CL_EXITHANDLER; you will get a list of the BADIs available.

2. Go to SE24, enter the class name CL_EXITHANDLER, choose Display, and double-click the GET_INSTANCE method to open it. Set a breakpoint inside this method. Now go to any transaction code and process some data; execution stops at the breakpoint you set in CL_EXITHANDLER. In the exit name you can see the list of BADIs attached to the transaction code.

There are multiple ways of searching for a BADI.

• Finding BADI Using CL_EXITHANDLER=>GET_INSTANCE
• Finding BADI Using SQL Trace (TCODE-ST05).
• Finding BADI Using Repository Information System (TCODE- SE84).

1. Go to the transaction for which you want to find the BADI, taking transaction VD02 as an example. Choose System -> Status and double-click the program name. Once inside the program, search for 'CL_EXITHANDLER=>GET_INSTANCE'.
Make sure the radio button "In main program" is checked. All the places in the program where BADIs are called will be listed.
The exporting parameter 'EXIT_NAME' of the method GET_INSTANCE of class CL_EXITHANDLER will have the BADI definition name assigned to it. The changing parameter 'INSTANCE' will have the interface assigned to it. Double-click the method to enter the source code; the definition of INSTANCE gives you the interface name.
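
For orientation, the call pattern you are searching for usually looks like the following sketch; the interface IF_EX_CUSTOMER_ADD_DATA is used only as an illustration, and the concrete interface and methods depend on the BADI.

* Hedged sketch of the typical BADI call in standard code.
DATA: L_BADI TYPE REF TO IF_EX_CUSTOMER_ADD_DATA.

CALL METHOD CL_EXITHANDLER=>GET_INSTANCE
  CHANGING
    INSTANCE = L_BADI.

* The standard program then calls the BADI methods on this instance,
* for example L_BADI->CHECK_ADD_ON_ACTIVE( ... ).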

2. Start transaction ST05 (Performance Analysis).
Set flag field "Buffer trace"
Remark: we also need to trace the buffer calls, because the BADI database tables are buffered (especially the views V_EXT_IMP and V_EXT_ACT).
Push the button "Activate Trace". Start transaction VA02 in a new GUI session. Go back to the Performance trace session.
Push the button "Deactivate Trace".
Push the button "Display Trace".
The popup screen "Set Restrictions for Displaying Trace" appears.
Now, filter the trace on Objects:
• V_EXT_IMP
• V_EXT_ACT

Push button "Multiple selections" button behind field Objects
Fill: V_EXT_IMP and V_EXT_ACT

All the interface class names of view V_EXT_IMP start with IF_EX_. This is the standard SAP prefix for BADI class interfaces. The BADI name is after the IF_EX_.
So the BADI name of IF_EX_CUSTOMER_ADD_DATA is CUSTOMER_ADD_DATA

3. Go to “Maintain Transaction” (TCODE- SE93).
Enter the Transaction VD02 for which you want to find BADI.
Click on the Display push buttons.
Get the Package Name. (Package VS in this case)

Go to transaction SE84 -> Enhancements -> Business Add-Ins -> Definition.
Enter the Package Name and Execute.

Here you get a list of all the enhancement BADIs for the given package (VS in this case).
The simplest way of finding a BADI is:
1. Find the program and package for the transaction code.
2. Go to transaction SE18.
3. Press F4.
4. Search by package or by program.


ALE IDOC'S AN INTRODUCTION

ALE stands for Application Link Enabling and is a remote connection technology allowing the decentralization of business processes by connecting decentralized systems together.

Using ALE, it is fairly easy to synchronize several SAP systems so that they contain the same data objects at any time. These objects may be master data (customers, vendors, GL accounts, cost centers, ...) or transaction data (FI documents, purchase orders, ...). To enable the synchronization, ALE supports not only mass transfer of data between systems but also selective data transfers of objects changed since the last transfer.

How does it work?

Viewed from a high level perspective, the process is straight and simple: a sender system selects the data that needs to be distributed, packs it in a standard format and sends it to one or several receiving systems. When a receiving system gets the data, it unpacks the standard format and records the data.

In fact, 3 layers are involved in this process:

An application layer which selects and records data in R/3.
A distribution layer which filters and converts data.
A communication layer which ensures the actual communication of records generated in a standard format.

The senders and receivers are determined thanks to a so-called distribution model which defines the transfer rules (who sends what to whom?). The definition of the distribution model must be known by all systems involved (either as sender or receiver) and must consequently exist on all those systems.

The Intermediate Document (IDoc)

The data transferred using ALE must have a standard SAP format to be understood by all partners in the communication. This format is the Intermediate Document (IDoc), which represents an intermediary structure between raw SAP data and EDI formats. This structure is not constant; it depends on the data to be transferred, and SAP provides the structures for most SAP standard objects such as customer master data, sales orders, ...

An IDoc with a specific structure is said to have a specific type. The IDoc type is nothing more than a version of a specific IDoc structure designed to contain data for a specific object type. For example, the IDoc type DEBMAS05 is the fifth version of a structure that represents a customer master data. The management of versions for IDocs structures is necessary because the objects evolve with time and tend to become more and more complex with new data fields or tables being added regularly.

The conversion of raw data forth (for the sender system) and back (for the receiver system) to the IDoc format is also ensured by SAP standard function modules as long as you transfer standard objects. For non standard objects (enhancements), you must define your own IDoc structure and write your own conversion function modules.

RFC connections

The data communication between the SAP servers may be done by several ways. The 2 most common ways are the transactional RFC and the file. The RFC connection is used to make a direct connection between servers while the file interface speaks for itself, it uses a file containing the IDocs from the sender system as input in the receiver(s) system(s).

The selection of a communication method is made through the ports configuration, as we will see in the next section. For the purpose of this article, we are going to choose the most efficient method: the transactional RFC method. To set it up, we first need to define the RFC destinations of the partner systems.

An RFC destination may be seen as a set of settings necessary to connect to a system using the RFC protocol. These settings include the address and type of the partner system along with connection information such as the user ID and password to use.

The RFC destinations of all partners systems must be defined on all systems to include in the distribution model. The transaction to use for this purpose is SM59.

Definition of the communication ports

The communication ports determine the type of data communication that must occur between systems. When the communication is to use the RFC protocol, transactional RFC ports must be created specifying the RFC destinations of the partner systems. Use transaction WE21 for this purpose.

Partners profiles

In SAP, all partners systems involved in a distribution model have a profile. There exist several profile types such as customers profiles, vendors profiles, ... but this distinction between profiles is generally not necessary and you will create in most cases your partners profiles using a generic Logical System type.

Before entering the settings of the logical systems partners, you have to create them in customizing. Also, each system in the distribution model must be assigned in its own system its own logical system that represents it. This assignment is done at client level and not at system level.

This is not easy to explain nor to understand so let's take a simple example. Let's consider we have a simple distribution model made up of a sender system (S01) and a receiver system (R01). We need to transfer data from S01 / client 100 to R01 / client 200. In both the systems, we will define the logical systems S01_100 and R01_200.

But in sender system S01, the logical system S01_100 will be assigned to the client 100 while in the receiver system R01, it will be R01_200 that will be assigned to client 200. With such a configuration you see it is even possible to transfer data between two clients from the same system.

A partner profile is used to determine a lot of important settings that will be involved in the data transfer. These settings vary depending on the role of the partner system (sender / receiver) and are defined per message type. A message type is more or less a version-independent IDoc type. It is mainly a convenient way of defining settings and identifying IDocs using a criterion independent of the partner system.

For a sender partner system (inbound parameters are filled in), following important settings are set per message type in the partner profile:

A process code used to indicate which function module will be used to convert the IDoc data to SAP data.

The time of input of the IDoc: as soon as the IDoc is created in the system or on request (using program RBDAPP01).

The post processing agent who will have to treat the data input errors if need be. The post processing agent may be either a user or any other HR organizational unit.

For a receiver partner system (outbound parameters are filled in), following settings are specified in the partner profile:

The receiver port to which the data will be sent.

The sending method: one IDoc at a time or by packets.

The IDoc type that will be sent to that partner. For a given message type, the IDoc type sent may vary depending on the receiver system. Indeed you may have different versions of SAP in your system landscape.

Definition of the distribution model and data filtering


We have already seen that the distribution model is used to decide who sends what to who. But, as you guessed, there is a little bit more to be said about it and the way to manage it.

The distribution model is maintained in a central transaction (BD64) which lets you access the most useful environmental settings necessary to set up the model. Before creating the model, you must first decide on which system you are going to create it. Indeed the model must exist in all partners systems and two possibilities exist to achieve this.

Either you create the model on each system manually (with the same model technical name), which is feasible only with few systems and/or simple models, or you create it in a specific system (usually the source system) and distribute it to the other systems using the distribution command in the BD64 transaction menu.

If we want to copy the customers master data from the production system to the other systems in the maintenance line for example, we may create the distribution model on the production system and distribute it to all partners systems.

The actual creation of the model requests that you mention a technical name for the model (unique identifier in the systems landscape), a sender system, a receiver system and message types to exchange between those systems. You may afterwards add more sending and receiving systems in the model as well as more message types.

For each message type defined between a sender and a receiver, you may set filters on specific fields of the message type so that IDocs be generated only when these specific values are matched. For example, you may decide to send only customers of a specific account group. These filters are set by selecting the 'no filter set' text under message types in BD64.

There is also another kind of filter, the segment filter. Segment filters do not filter IDoc creation based on values as we have just seen; instead they unconditionally remove segments of created IDocs that must be excluded from the transfer (transaction BD56).

This article will help you understand the basics of ALE and IDocs via a simple do-it-yourself example. We will create a custom IDoc in one SAP system and then post some business data through it to another SAP system. Business data will be picked up from custom data dictionary tables.

ALE – Application Link Enabling is a mechanism by which SAP systems communicate with each other and with non-SAP EDI subsystems. Thus it helps integration of distributed systems. It supports fail-safe delivery which implies that sender system does not have to worry about message not reaching the source due to unavoidable situations. ALE can be used for migration and maintenance of master data as well as for exchanging transactional data.

The messages that are exchanged are in the form of IDocs or Intermediate Documents. IDocs act like a container or envelope for the application data. An IDOC is created as a result of execution of an Outbound ALE. In an Inbound ALE an IDOC serves as an input to create application document. In the SAP system IDocs are stored in the database tables. They can be used for SAP to SAP and SAP to non-SAP process communication as long as the participating processes can understand the syntax and semantics of the data. Complete documentation on IDOC is obtained by using transaction WE60.

Every IDoc has exactly one control record along with a number of data records and status records. Control record has the details of sender/receiver and other control information. Data records contain the actual business data to be exchanged while the status records are attached to IDoc throughout the process as the IDoc moves from one step to other.
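
As a small illustration of this structure, the sketch below reads the control record and the data records of an existing IDoc directly from the standard tables EDIDC (control records) and EDID4 (data records); the IDoc number used here is only a placeholder.

* Hedged sketch: display the control data and segments of one IDoc.
DATA: LS_CONTROL TYPE EDIDC,
      LT_DATA    TYPE STANDARD TABLE OF EDID4,
      LS_DATA    TYPE EDID4.

SELECT SINGLE * FROM EDIDC
  INTO LS_CONTROL
  WHERE DOCNUM = '0000000000123456'.     "placeholder IDoc number

SELECT * FROM EDID4
  INTO TABLE LT_DATA
  WHERE DOCNUM = '0000000000123456'.

WRITE: / 'Message type:', LS_CONTROL-MESTYP,
       / 'IDoc type:', LS_CONTROL-IDOCTP.
LOOP AT LT_DATA INTO LS_DATA.
  WRITE: / LS_DATA-SEGNUM, LS_DATA-SEGNAM.
ENDLOOP.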


ABAP DISPLAY

To access the ABAP code that triggered the SQL statement, use the ABAP Display button on the trace list screen, or choose Trace -> ABAP Display from the menu. The ABAP code is displayed for the SQL statement on which your cursor is currently positioned in the trace list.

The SQL statement in the ABAP code, which is normally formulated in OPEN SQL, may differ considerably from the SQL statement processed by the database. The following list indicates how the R/3 database interface converts certain SQL terms:

• A WHERE condition for the client field is added to each SQL statement that accesses a client-dependent table (except for SELECT … CLIENT SPECIFIED)

• SELECT … FOR ALL ENTRIES => several SELECT statements with corresponding OR or IN clauses (details later)

• MODIFY => INSERT or UPDATE

• Accesses to pooled and cluster tables => accesses to the corresponding table pool or table cluster

• EXPORT/IMPORT TO/FROM DATABASE => INSERT/SELECT on INDX-like tables

• OPEN CURSOR/FETCH NEXT => the equivalent SELECT



TRACE SUMMARY:

In the trace list, choose Goto -> Summary to display a summarized form of the list. The list shows the SQL operations, the number of affected data records, and the processing time required.

The summarized list is particularly useful for identifying loop constructs. The list can be compressed even further to show the most frequently accessed tables by using the Summarize button.

In the trace list, choose Goto -> Statement Summary to find out how many structure-identical SQL statements there are, and which tables or views they access (structure-identical statements use an identical access strategy with different variable values). The displayed list is sorted by the time taken, with the time indicated in microseconds.

The list also indicates the percentage of value-identical accesses (identical SELECTs) and the total number of returned data records. You should avoid all value-identical accesses.

From the trace list, choose Goto -> Identical Selects to see an overview of the value-identical SQL statements in a trace (a value-identical statement has the same access strategy and the same variable values).

Identical SELECTS deliver an identical quantity of data records. Therefore, it makes sense to buffer the results in, for example, an internal table of the calling ABAP program.

From the number of executions of SQL statements that are similar in value, you can determine the optimization potential. For example, if there are 4 value-similar executions on table VVBAK, buffering the resulting data in an internal table makes 3 executions unnecessary. This represents an optimization potential for this SQL statement of between 60% and 75%.
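
A minimal sketch of this buffering pattern (table SCARR and the key value used here are purely illustrative):

* Hedged sketch: buffer the result of a repeated SELECT in an internal
* table so that identical SELECTs hit the database only once.
DATA: LT_SCARR TYPE STANDARD TABLE OF SCARR,
      LS_SCARR TYPE SCARR.

READ TABLE LT_SCARR INTO LS_SCARR WITH KEY CARRID = 'LH'.
IF SY-SUBRC <> 0.
  SELECT SINGLE * FROM SCARR INTO LS_SCARR WHERE CARRID = 'LH'.
  IF SY-SUBRC = 0.
    APPEND LS_SCARR TO LT_SCARR.
  ENDIF.
ENDIF.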


The goal of using an SQL Performance Trace is to find SQL statements with a high optimization potential. Use three user sessions. One user session is for the trace list, one for the compressed summary, and one for identical selects.

From the trace list, you can access Explain SQL or the ABAP code. An expensive SQL statement is indicated when a database operation takes longer than 200,000 microseconds or requires more than 10 FETCHes.

In addition, a series of SQL statements that are similar in structure usually indicates nesting that can be optimized considerably. If the structure-identical SQL statements in a loop or nesting require in total more than 200,000 microseconds, they can be regarded as expensive.

The display of identical SELECTs tells you which of the statements are similar in value and are, therefore being executed unnecessarily. You should always avoid unnecessary execution of SQL statements.

BADI and its advantage in ABAP

SAP Business Add-Ins (BAdIs) are one of the most important technologies used to adapt SAP software to specific requirements. BAdIs were introduced with Release 4.6 and replace function module exits. This technology is not limited to SAP applications. BAdI calls can be integrated in customer applications.

These can then be enhanced by other customer applications. In the various SAP applications, BAdI calls are implemented at places where enhancements are appropriate. Business Add-Ins are enhancements to the standard version of the system. They can be inserted into the SAP System to accommodate user requirements too specific to be included in the standard delivery. Since specific industries often require special functions, SAP allows you to predefine these points in your software.

As with customer exits, two different views are available:

· In the definition view, an application programmer defines exit points in a source that allow specific industry sectors, partners, and customers to attach additional coding to standard SAP source code, without having to modify the original object.

· In the implementation view, the users of Business Add-Ins can customize the logic they need or use a standard solution, if one is available.

In contrast to customer exits, Business Add-Ins no longer assume a two-level infrastructure (SAP and customer solutions), but instead allow for a multi-level system landscape (SAP, country-specific versions, industry solutions, partner, customer, and so on).

You can create definitions and implementations of Business Add-Ins at any level of the system landscape. SAP guarantees the upward compatibility of all Business Add-In interfaces. Release upgrades do not affect enhancement calls from within the standard software, nor do they affect the validity of call interfaces. You do not have to register Business Add-Ins in SSCR.

The Business Add-In enhancement technique differentiates between enhancements that can only be implemented once and enhancements that can be used actively by any number of customers at the same time. In addition, Business Add-Ins can be defined according to filter values. This allows you to differentiate between Add-In implementations using the filter Country or other criteria.

The enhancement technique is set up in such a way that it is possible to define interfaces for ABAP source code, screens, GUI interfaces, and tables. These allow customers to include their own enhancements in the standard. A single Business Add-In contains all of the interfaces necessary to implement a specific task.

A BADI is essentially an object-oriented version of a user exit. Instead of entering program code into some function module (as in a customer exit), you define a class that implements predefined methods, and those methods are called at predefined points just like an old user exit. Some BADIs can have multiple independent implementations, which is much better for software deployment because several developers can implement the same BADI independently. BADIs and user exits are used to enhance R/3 for customer needs.

Each BAdI has a definition and one or more implementations. The definition specifies the methods (in the class concept) that are used for performing various functions. The BAdI definition can be viewed in transaction SE18 (for standard ones), and user-defined BAdIs can be created in the same transaction as well.

When you create a BAdI definition, a class interface is automatically created, and you can define your methods in the interface. The implementation of the methods is done in transaction SE19.


Aggregate Functions usage in ABAP Program

You can use aggregate functions (COUNT, SUM, MAX, MIN, AVG) to perform calculations using the DBMS. Only use aggregate functions if they significantly reduce the quantity of data records to be transferred.


Aggregate functions are usually used with a GROUP BY clause. SQL statements that contain aggregate functions bypass the R/3 table buffers. (Note that this can be a disadvantage if tables are buffered.)

Field values are processed differently, depending on whether they are in ABAP or at database level. The database uses a different method than ABAP for rounding data (for example, when converting F to I). The database recognizes the value NULL; ABAP does not.

For the aggregate function AVG, use data type F (number with floating point) for the target field. For the aggregate function SUM, make the data type of the target field longer than the data type of the source field, to allow for overflow.
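
A minimal sketch of these recommendations (SBOOK and its amount field LOCCURAM are used only as an illustrative example):

* Hedged sketch: aggregates calculated by the database; note the data
* types of the target fields (F for AVG, a longer type for SUM).
DATA: LV_COUNT TYPE I,
      LV_AVG   TYPE F,
      LV_SUM   TYPE P DECIMALS 2.

SELECT COUNT( * ) AVG( LOCCURAM ) SUM( LOCCURAM )
  INTO (LV_COUNT, LV_AVG, LV_SUM)
  FROM SBOOK
  WHERE CARRID = 'LH'.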

Using the HAVING clause in an SQL query:

To apply a logical condition to the groups defined in the GROUP BY clause, use HAVING with a SELECT statement. The database determines the resulting set; that is, the data quantity to be transferred depends on the selectivity of the HAVING clause.

Using HAVING clauses is similar to using aggregate functions. HAVING clauses create an extra load on the database. Therefore, only use HAVING clauses if they significantly reduce the quantity of data records to be transferred. Before you use a HAVING clause, check whether the condition can be specified in the SELECT clause instead.
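
A minimal sketch of a GROUP BY with a HAVING condition (again using SBOOK purely as an example; the threshold value is arbitrary):

* Hedged sketch: the HAVING condition is evaluated by the database,
* so only the qualifying groups are transferred.
DATA: LV_CONNID TYPE SBOOK-CONNID,
      LV_TOTAL  TYPE P DECIMALS 2.

SELECT CONNID SUM( LOCCURAM )
  INTO (LV_CONNID, LV_TOTAL)
  FROM SBOOK
  WHERE CARRID = 'LH'
  GROUP BY CONNID
  HAVING SUM( LOCCURAM ) > 100000.
  WRITE: / LV_CONNID, LV_TOTAL.
ENDSELECT.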

Using Array Fetch:

For SELECT … ENDSELECT and an array fetch, the same number of data records must be transferred from the database server to the application server. Since data records are transferred in 32 KB blocks, the transfer load (that is, the number of fetches) is the same for both variants.

However, an array fetch is preferable to a SELECT … ENDSELECT loop: in a SELECT … ENDSELECT, data records are transferred one record at a time from the database interface to the ABAP program, whereas in an array fetch the data records are transferred to the ABAP program in one block.
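
The two variants side by side, as a minimal sketch (SPFLI is only an example table):

DATA: LT_SPFLI TYPE STANDARD TABLE OF SPFLI,
      LS_SPFLI TYPE SPFLI.

* Array fetch: all records arrive in the ABAP program in one block
SELECT * FROM SPFLI INTO TABLE LT_SPFLI WHERE CARRID = 'LH'.

* SELECT ... ENDSELECT: records are passed to the program one by one
SELECT * FROM SPFLI INTO LS_SPFLI WHERE CARRID = 'LH'.
* ... process LS_SPFLI ...
ENDSELECT.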

Using UP TO n ROWS in an SQL query:

There are two ways to read a fixed number of data records. You can transfer the data records to the application server using SELECT … ENDSELECT and then determine whether the required number of rows has been read. However, using SELECT … UP TO n ROWS is preferable, because only the requested number of data records is transferred from the database to the application.


In the second variant, the UP TO n ROWS addition ensures that only the required data quantity is read from the database and transferred to the database interface.
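
A minimal sketch of the preferable variant (VBAK and the date value are illustrative only):

* Hedged sketch: only the first 100 qualifying records are read on the
* database and transferred to the application server.
DATA: LT_VBAK TYPE STANDARD TABLE OF VBAK.

SELECT * FROM VBAK
  INTO TABLE LT_VBAK
  UP TO 100 ROWS
  WHERE ERDAT >= '20240101'.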


SQL Performance trace part two

The trace list is the same regardless of the database system. Data operations that require more than 100 milliseconds are marked red. Each section of the list has a header identifying the user, transaction, R/3 work process number, and R/3 client ID. The expanded trace list has the following columns:

hh:mm:ss:ms Time stamp indicating when sending of the database request began (hour:minute:second:millisecond)

Duration Duration of the database operation in microseconds (varies according to system load)

Program Name of the program calling the SQL statement

Object Name of the table or view, as taken from the SQL statement

Oper Database operation to be executed

Curs Name of the database cursor

ArrSz Size of packets containing the data records sent from the database server to the application server

Rec Number of data records sent by the database operation

RC Return code of the database system

Statement The text of the SQL statement


DATABASE OPERATIONS:

During the processing of an SQL statement performing a read access, the following operations occur:

PREPARE: This operation parses the SQL statement and converts it to a statement that can be processed by the database. In particular, the access strategy is determined. Placeholders are used for variables in the SQL statement.

OPEN/REOPEN: This operation replaces the placeholders in the SQL statement with concrete values, and opens the database cursor.

FETCH: This operation transfers the database records from the database server to the application server. The number of records transferred per FETCH is given by dividing the length of the data record to be read in bytes by the value of dbs/io_buf_size, which, by default, is set to 33 792 bytes.

An SQL statement performing a write access is processed in the same way, except that EXEC/REEXEC is used instead of OPEN/REOPEN and FETCH.

Database cursors per R/3 work process are buffered in the SAP cursor cache. This enables the PREPARE operation to be omitted for repeated executions of similar SQL statements. Displacement occurs in the SAP cursor cache.

EXPLAINING SQL USAGE IN A REPORT:

To access Explain SQL from the trace list, choose Trace -> Explain SQL. Alternatively, use the Explain SQL button. Explain SQL is performed for the SQL statement on which your cursor is currently positioned in the trace list. Explain SQL is only possible for the database operations PREPARE, OPEN, and REOPEN.

Double-click the table frame to display a dialog box containing statistical information on the table and information on all indexes of the table. Double-click an index to display statistical information on it.

Use the Analyze button to refresh the statistical information on a table and its indexes. To enable the cost-based optimizer to function effectively, the statistical information it uses must be updated after each change to a table or its indexes. Use the Index Statistics button to display an overview of the statistical information across all indexes.




Accessing individual tables in the ABAP Dictionary

SELECT FOR ALL ENTRIES:


With SELECT FOR ALL ENTRIES, you can divide SELECT statements into loops. This variant enables you to create a join between an internal table and a database table. For example, only those records are read from table KKNA1 whose customer numbers appear in the internal driver table g_itab_vvbak.

The following problems occur with SELECT FOR ALL ENTRIES:

You cannot have an empty driver table. If the driver table is empty, selection is not limited. Note that if the WHERE clause contains conditions that do not refer to the internal driver table, these conditions are not evaluated either.

Duplicate table entries must be deleted from the internal driver table before the SELECT FOR ALL ENTRIES is executed. If they are not deleted, identical data is read unnecessarily from the database.

If SELECT FOR ALL ENTRIES is used, SQL statements are divided into several SELECT statements by the database interface. The individual SELECT statements are processed on the database and the results are buffered in the database interface. The database interface eliminates duplicate data records from the resulting set, and transfers the overall results to ABAP.

Each SELECT statement processed on the database is executed as a concatenation of individual accesses (for example, index range scans or index unique scans). Concatenation is performed over the entries in the internal driver table. Since SELECT FOR ALL ENTRIES can become very complex, you should avoid combining them with SELECT-OPTIONS or RANGES tables.

Exactly how the WHERE clauses of individual SELECT statements are executed by the database depends on the type of database. The link is generated, for example, using an OR operation as IN list.
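
A minimal sketch of the pattern together with the two precautions described above (KNA1 and the driver table used here are illustrative; adapt the field names to your own tables):

* Hedged sketch: FOR ALL ENTRIES with an empty-table check and a
* duplicate-free driver table.
TYPES: BEGIN OF TY_CUST,
         KUNNR TYPE KNA1-KUNNR,
       END OF TY_CUST.

DATA: LT_DRIVER TYPE STANDARD TABLE OF TY_CUST,
      LT_KNA1   TYPE STANDARD TABLE OF KNA1.

* ... fill LT_DRIVER with customer numbers ...

SORT LT_DRIVER BY KUNNR.
DELETE ADJACENT DUPLICATES FROM LT_DRIVER COMPARING KUNNR.

IF LT_DRIVER IS NOT INITIAL.  "never run FOR ALL ENTRIES on an empty table
  SELECT * FROM KNA1
    INTO TABLE LT_KNA1
    FOR ALL ENTRIES IN LT_DRIVER
    WHERE KUNNR = LT_DRIVER-KUNNR.
ENDIF.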

USING RANGE:

You can use RANGES tables in ABAP. You can define RANGES tables with the ABAP statements SELECT-OPTIONS and RANGES. These two statements implicitly define an internal table with the fields SIGN, OPTION, LOW and HIGH (see the R/3 documentation). A RANGES table defined with SELECT-OPTIONS is filled dynamically by the end user on the selection screen; how a RANGES table defined with RANGES is filled is determined by the program.

The database interface converts the rows in a RANGES table so that they can be read by the DBMS, and links them with OR. The SQL statement produced is then transferred to the database.

The RANGES table can contain rows with complex expressions (for example, BT => BETWEEN, CP => LIKE). The SQL statement that results from a RANGES table can therefore be complex and difficult for the DBMS to process.

If the program determines how RANGES tables are filled, you must limit the number of entries. If the RANGES table is too large, a complex database statement is generated when the OR or IN list is evaluated. This statement may take a long time to be evaluated on the database. The maximum possible length of the database statement depends on the database (for example, with the ORACLE 32-bit version, the maximum is 32 KB). If the maximum length is exceeded, an ABAP dump may be generated.
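
A minimal sketch of a RANGES table filled by the program and used in a WHERE ... IN condition (SPFLI and the values are illustrative):

* Hedged sketch: the database interface converts the RANGES rows into
* an OR/IN/BETWEEN expression.
DATA: LT_SPFLI TYPE STANDARD TABLE OF SPFLI.
RANGES: R_CARRID FOR SPFLI-CARRID.

R_CARRID-SIGN   = 'I'.
R_CARRID-OPTION = 'BT'.        "BT is converted to BETWEEN
R_CARRID-LOW    = 'AA'.
R_CARRID-HIGH   = 'LH'.
APPEND R_CARRID.

SELECT * FROM SPFLI
  INTO TABLE LT_SPFLI
  WHERE CARRID IN R_CARRID.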

USING AGGREGATE FUNCTIONS:

You can use aggregate functions (COUNT, SUM, MAX, MIN, AVG) to perform calculations using the DBMS. Only use aggregate functions if they significantly reduce the quantity of data records to be transferred.


Aggregate functions are usually used with a GROUP BY clause. SQL statements that contain aggregate functions bypass the R/3 table buffers. (Note that this can be a disadvantage if tables are buffered.)

Field values are processed differently, depending on whether they are in ABAP or at database level. The database uses a different method than ABAP for rounding data (for example, when converting F to I). The database recognizes the value NULL. ABAP does not recognize the value NULL.

For the aggregate function AVG, use data type F (number with floating point) for the target field. For the aggregate function SUM, make the data type of the target field longer than the data type of the source field, to allow for overflow.

Using database hints in ABAP reports

If you want to force the database optimizer to choose a particular execution path, use database hints. In R/3 releases before 4.5A, database hints can only be used in the respective database-specific SQL dialect (Native SQL between EXEC SQL. … ENDEXEC.).

As of R/3 Release 4.5A, you can use database hints directly from OPEN SQL. To do this, add the addition %_HINTS DBMS 'DBHINT' to the end of the SQL statement. For DBMS, enter the applicable database system (for example, ORACLE, INFORMIX); for DBHINT, enter the relevant database-specific hint.

If the hint cannot be interpreted by the database system, it is treated as a comment and does not affect the execution of database statements.
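
A minimal sketch of the OPEN SQL syntax described above, assuming an Oracle database; the table, the selection, and especially the index name in the hint are purely illustrative:

* Hedged sketch: forcing a (hypothetical) index VBAK~ZZ1 via a hint.
DATA: LT_VBAK TYPE STANDARD TABLE OF VBAK.

SELECT * FROM VBAK
  INTO TABLE LT_VBAK
  WHERE ERDAT >= '20240101'
  %_HINTS ORACLE 'INDEX("VBAK" "VBAK~ZZ1")'.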

A database hint should be incorporated into an SQL statement only as a last option (regardless of whether it is formulated in native SQL or OPEN SQL). Database hints are database-specific. If you replace the DBMS, you must change all the SQL statements that have database hints (and document your changes).

Database hints ignore the database optimizer's strategy of dynamically making selections according to table and index growth and the selectivity of fields. Therefore, an index that is good today could be bad tomorrow.

Reducing the records to be transferred from the database

To reduce the number of data records to be transferred, for each SQL statement you must specify a WHERE clause that is as selective as possible. A SELECT statement without a WHERE condition is an indication of a design error in the program. You must ensure that the number of selected records remains constant with time, especially when selecting on transaction tables (for example, BKPF, BSEG, COBK, COEP, LIPK, MKPF, VBAK, VBAP).


To reduce the amount of data to be transferred, do not formulate conditions for selecting data using a CHECK statement within a SELECT … ENDSELECT. Instead, formulate conditions as part of a WHERE clause.
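
A minimal sketch of the difference (VBAK and the sales organization value are illustrative):

DATA: LS_VBAK TYPE VBAK.

* Poor: every record is transferred and then filtered in ABAP
SELECT * FROM VBAK INTO LS_VBAK.
  CHECK LS_VBAK-VKORG = '1000'.
* ...
ENDSELECT.

* Better: the database returns only the records that are needed
SELECT * FROM VBAK INTO LS_VBAK
  WHERE VKORG = '1000'.
* ...
ENDSELECT.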

Always specify all known conditions in the WHERE clause. Without one, the DBMS cannot use an index to optimize a statement.

To make reusing SQL statements possible, and thus to optimally utilize the DB SQL cache, always adhere to the order of fields specified in the ABAP Dictionary when formulating SELECT clauses and WHERE clauses.


Reducing the columns to be transferred in a program

If you use SELECT *, it is often the case that more data than necessary is transferred from the database server to the application server. If you require the contents of only a few columns, always list the table fields individually in a field list. If the number of fields can be reduced by more than half, creating a field list makes sense.

In this context, the variant SELECT … INTO TABLE itab PACKAGE SIZE n … ENDSELECT is important. PACKAGE SIZE n means that the data is transferred to the internal table itab in packages of n records.

If you need different projections on a database table at different places in a program, read all the required fields at once from the database, buffer them in an internal table, and then distribute the data within the program.
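
A minimal sketch combining a field list with PACKAGE SIZE (the VBAK fields and the package size are illustrative):

* Hedged sketch: only the needed columns are transferred, in packages
* of 1000 records each.
TYPES: BEGIN OF TY_VBAK,
         VBELN TYPE VBAK-VBELN,
         ERDAT TYPE VBAK-ERDAT,
         KUNNR TYPE VBAK-KUNNR,
       END OF TY_VBAK.

DATA: LT_VBAK TYPE STANDARD TABLE OF TY_VBAK.

SELECT VBELN ERDAT KUNNR
  FROM VBAK
  INTO TABLE LT_VBAK
  PACKAGE SIZE 1000
  WHERE ERDAT >= '20240101'.
* process the current package of LT_VBAK here
ENDSELECT.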



Index design in the database for SAP

Changing the index design of a table in the R/3 System can affect SQL statements other than the one you want to optimize. You must therefore ensure that other SQL statements are not negatively influenced by a new, changed or deleted index.

To do this, you must first establish which database views are referenced on the table. You can find this information in the where-used list in the ABAP Dictionary.

Next, perform an object-related shared SQL area analysis. This means that you restrict the analysis to the relevant tables or database views, rather than finding inefficient SQL statements in the whole system.

After that, change the index design (also activate the indexes, and update the table and index statistics). Immediately after you have changed the index design, reset the DB SQL cache (use the Reset pushbutton).

Repeat the object-related DB SQL cache analysis approximately two days after changing the index design. Because you reset the DB SQL cache, only information is displayed for SQL statements executed after the index design was changed.

To establish in which database views the tables to be analyzed are used, go to the initial screen of the ABAP Dictionary. To do this, from the SAP standard menu choose Tools >> ABAP Workbench >> Dictionary, or use Transaction SE12. Enter the table name in the appropriate field, and choose Where-used list.

In the dialog box that appears, select the checkbox Views and confirm your selection. A list appears, showing all the database views that use the table. You must perform the object-related DB SQL cache analysis for the database views and for the table itself.

To perform an object-related DB SQL cache analysis, first go to the initial screen of the database monitor. To do this, from the SAP standard menu choose Tools >> Administration >> Monitor >> Performance >> Database >> Activity, or use Transaction ST04.

Choose Detail analysis menu >> SQL request. A dialog box appears, where you can limit the SQL statements displayed. Delete the entries in the fields Buffer Gets and Disk Reads. In the Table field, replace the * with the name of the table to be analyzed.

Download to a local file the list of SQL statements contained in the DB SQL cache for the table. Save the Explain SQL display for each SQL statement. Perform the analysis in the same way for all database views that use the table.

A few days after you changed the index design, perform the same analysis again, and for each SQL statement compare the values for logical and physical accesses per execution. This second analysis enables you to ensure that none of the SQL statements analyzed is negatively influenced by the change in index design. The change should affect only the SQL statement to be optimized.