
Friday, 14 September 2012

Delta Errors

Error 1: Delta invalidation (Data mart scenario):

To solve the problem, SAP recommends re-initializing the load:
1) Delete the requests from the cube that have the ODS as their source (otherwise you will get duplicate data).
2) Delete the initialization in the InfoPackage for the ODS InfoSource.
3) Schedule a new init from the ODS to the InfoCube.

Workaround (customer's responsibility; SAP will not support issues arising from this procedure).
You should ensure that:

- The requests that have not yet been loaded to the target cube are still available in the PSA for the loads to the ODS, and
- You delete only the requests that have not already been loaded.

Solution:

1. Delete ONLY the requests from the source ODS that have NOT already been loaded to the target cube. If the requests you need to delete are still available in the PSA for the load to the DSO, they can be reloaded to the target in a later step (see step 4 below). If all requests are already loaded to the target and none are missing, you do not need to do anything in this step; proceed to step 2.
2. You now need to delete the existing init request for the data mart load. To do this, reset the delta management in the InfoPackage (just take one if there are several) via the menu 'Scheduler' --> 'Init Options for Source System'.
3. Now execute an init load without data transfer: in the InfoPackage set the extraction mode to "Initialize without Data Transfer" and schedule the InfoPackage. This fills RSDMDELTA with ALL requests contained in the source ODS's change log at that time. In other words, this step resets the Data Mart flag in the source ODS for the requests that have already been loaded (a quick verification is sketched after this list).
4. Reload the requests that you deleted in step 1 (if any) from the PSA to the source ODS.
5. The next Data Mart delta from the source ODS should load these requests into the targets.
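The effect of step 3 can be verified with a quick check along the following lines. This is only an illustrative sketch: the ODS name is a placeholder, and ICNAME is assumed here to hold the source object of the data mart, as mentioned in Error 5 below.

* Illustrative check: count the entries that the init without data
* transfer has written to Data Mart delta management for the source ODS.
DATA: lv_cnt TYPE i.

SELECT COUNT( * ) INTO lv_cnt
  FROM rsdmdelta
  WHERE icname = 'ZSALES_O01'.         "placeholder source ODS name

IF lv_cnt = 0.
  WRITE: / 'RSDMDELTA is still empty - the Data Mart flag was not reset.'.
ELSE.
  WRITE: / 'Requests registered in RSDMDELTA:', lv_cnt.
ENDIF.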

Error 2: Duplicate data in case of repeat delta:

If a delta load fails, it does NOT mean that a repeat delta has to be executed automatically. Before you execute a repeat delta, it is necessary to analyse whether the data has already been extracted correctly and whether the corresponding tables (ROOSPRMSC) have been updated.
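As an illustration, the delta administration entry of the DataSource can be displayed in the source system with a check like the one below. The DataSource and logical system values are placeholders, and the request field names are assumptions that should be verified in SE11 for your release.

* Illustrative check in the OLTP/source system: display the delta
* administration entry of a DataSource before requesting a repeat.
DATA: ls_prmsc TYPE roosprmsc.

SELECT SINGLE * FROM roosprmsc
  INTO ls_prmsc
  WHERE oltpsource = '0FI_GL_4'        "placeholder DataSource
    AND rlogsys    = 'BWPCLNT100'.     "placeholder BW logical system

IF sy-subrc <> 0.
  WRITE: / 'No delta administration entry found for this DataSource.'.
ELSE.
  WRITE: / 'Init request  :', ls_prmsc-initrnr,   "field names assumed -
         / 'Delta request :', ls_prmsc-deltarnr.  "verify them in SE11
ENDIF.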

Error 3: DataMart repeat request returns 0 records

After an incorrect delta, the repeat delta does not fetch the delta records. Reset the Data Mart status of the source request corresponding to the deleted Data Mart request in the administration view of the source object.

Error 4: There are already full uploads in the ODS object and therefore inits or deltas cannot be posted or activated.

You have already loaded full requests into the ODS and, due to a business requirement, you want to change this to delta. Once a full upload has occurred for an ODS object, you can no longer activate an init for the same DataSource/source system combination in this ODS object, because the system does not check for overlapping selection conditions. Execute report RSSM_SET_REPAIR_FULL_FLAG, which converts all full loads into repair full requests.

Error 5: Unable to compress although the delta seems to be updated in the data target

When a delta DTP is started, the pointer DMEXIST is increased to prevent the deletion of the source request. As long as there is still a delta DTP or export DataSource that has NOT yet picked up the request, DMALL is not updated and the request cannot be compressed. Only when ALL delta DTPs AND export DataSources have picked up the source request is the pointer DMALL increased.

So, if you are unable to compress due to delta errors, check that the delta to every downstream target was successful. If even one downstream target has not been filled with delta data, compression is not allowed.

For InfoCubes, the contents of the fields RSMDATASTATE-DMEXIST and RSMDATASTATE-DMALL must always be equal to the largest request in the RSDMDELTA table of the relevant cube (field ICNAME).

Only then is compression possible.

In general, in table RSMDATASTATE:
DMALL <= DMEXIST <= QUALOK.
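These counters can be displayed for a cube with a small check like the sketch below (the InfoCube name is a placeholder; DMEXIST, DMALL and QUALOK are the fields discussed above):

* Illustrative check: show the Data Mart counters of an InfoCube before
* attempting compression. DMALL <= DMEXIST <= QUALOK must hold, and DMALL
* must have reached the last request fetched by all delta targets.
DATA: ls_state TYPE rsmdatastate.

SELECT SINGLE * FROM rsmdatastate
  INTO ls_state
  WHERE infocube = 'ZSALES_C01'.       "placeholder InfoCube name

IF sy-subrc <> 0.
  WRITE: / 'No RSMDATASTATE entry found for this InfoCube.'.
ELSE.
  WRITE: / 'DMEXIST:', ls_state-dmexist,
         / 'DMALL  :', ls_state-dmall,
         / 'QUALOK :', ls_state-qualok.
  IF ls_state-dmall < ls_state-dmexist.
    WRITE: / 'At least one delta target has not fetched all requests -',
             'compression is blocked.'.
  ENDIF.
ENDIF.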

Tables to check in case of a 3.x data flow: RSDMDELTA and RSDMSTAT.
Tables to check in case of a 7.x data flow: RSBKREQUEST and RSSTATMANREQMAP.

For ODS objects, the contents of the fields RSBODSLOGSTATE-PROCESSED_ALL and RSBODSLOGSTATE-PROCESSED_ONE must always be equal to the largest request in the RSDMDELTA table. In general, in table RSBODSLOGSTATE:
PROCESSED_ALL <= PROCESSED_ONE <= ACTIVE.

Error 6: Last delta incorrect. A repeat must be requested. This is not possible.

For some 0FI* extractors, the delta has failed and you want to repeat the delta, but this is not possible. In the standard system you can repeat the delta extractions for the last sixty days; the number of days is defined by the 'DELTIMEST' parameter in table BWOM_SETTINGS. If you still want to repeat the delta, you need to change the time stamp. Refer to note 860500 for more details.
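The current value of DELTIMEST can be read in the source system with a check along these lines. This is only a sketch: the name/value column layout of BWOM_SETTINGS is assumed here and should be verified in SE11.

* Illustrative check in the source system: for how many days the 0FI*
* delta extraction can be repeated (parameter DELTIMEST).
DATA: ls_setting TYPE bwom_settings.

SELECT SINGLE * FROM bwom_settings
  INTO ls_setting
  WHERE param_name = 'DELTIMEST'.      "column name assumed - verify in SE11

IF sy-subrc = 0.
  WRITE: / 'DELTIMEST =', ls_setting-param_value.   "column name assumed
ELSE.
  WRITE: / 'DELTIMEST is not maintained - the default applies.'.
ENDIF.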

Error 7: Even after an init simulation, further deltas have not fetched the relevant data

After an initialization without data transfer, the delta has not fetched any data. You can check whether the DataSource is designed to perform init simulations by checking the field ROOSOURCE-INITSIMU for the respective DataSource. If it is not enabled and the delta has been invalidated, the only option is to do the re-init with data transfer.
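Whether a DataSource supports init simulation can be checked in the source system roughly as follows (the DataSource name is a placeholder; the check assumes that 'X' marks a supported simulation):

* Illustrative check in the source system: does the DataSource allow
* 'Initialize without Data Transfer' (init simulation)?
DATA: lv_initsimu TYPE roosource-initsimu.

SELECT SINGLE initsimu FROM roosource
  INTO lv_initsimu
  WHERE oltpsource = '0FI_GL_4'        "placeholder DataSource
    AND objvers    = 'A'.

IF lv_initsimu = 'X'.
  WRITE: / 'Init simulation is supported.'.
ELSE.
  WRITE: / 'Init simulation is NOT supported - re-init with data transfer.'.
ENDIF.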

Error 8: Missing delta records in a generic delta with calendar day

Using "Calendar day" as the delta-relevant field has certain disadvantages compared to using a time stamp. If there is a suitable time stamp field in the source structure, use that field as the delta-relevant field.

Error 9: Dump during execution of a delta InfoPackage / while opening an InfoPackage

"MESSAGE_TYPE_X" "SAPLRSSM" or "LRSSMU36" "RSSM_OLTPSOURCE_SELECTIONS"
Reasons: Improper system copy / Terminated initialization / Overlapping initialization selection.

This dump occurs due to inconsistent entries in table RSSDLINIT (BW system) and tables ROOSPRMSC and ROOSPRMSF (OLTP system).
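On the BW side, the stored initializations can be listed with a small check like the sketch below and then compared manually with ROOSPRMSC/ROOSPRMSF in the source system. The source system and DataSource names are placeholders, and the RSSDLINIT field names used here are assumptions to verify in SE11.

* Illustrative check on the BW side: list the initializations stored in
* RSSDLINIT for one DataSource / source system combination.
DATA: lt_init TYPE STANDARD TABLE OF rssdlinit,
      ls_init TYPE rssdlinit.

SELECT * FROM rssdlinit
  INTO TABLE lt_init
  WHERE logsys     = 'R3PCLNT100'      "placeholder source system
    AND oltpsource = '0FI_GL_4'.       "placeholder DataSource

LOOP AT lt_init INTO ls_init.
  WRITE: / 'Init request:', ls_init-rnr.   "field name assumed
ENDLOOP.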

Error 10: Delta is lost after a system copy:

You have copied a source system, but the BW system was not copied. For the delta to work from the newly copied source system, repair the delta queue in the BW system: execute report RSSM_OLTP_INIT_DELTA_UPDATE for each DataSource that has an active init request, and set the field 'ALWAYS' to 'X'. This pushes the init/delta information of BW into the source system and thus allows you to continue loading deltas or to do a new init.

This report updates the delta tables in the source, and from a technical point of view a delta upload becomes possible. Be aware, however, that after a system copy (if you copy only one system, not both) you cannot compare the data between the source and the BW system, so a new init is recommended.
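Because the selection-screen parameter names of the report vary between releases, a safe sketch is simply to open it via its own selection screen for each affected DataSource and fill in the DataSource, the source system and the field 'ALWAYS' = 'X' manually:

* Illustrative wrapper: open the repair report via its selection screen.
* The DataSource, the source system and the field 'ALWAYS' = 'X' are
* entered manually, so no parameter names are hard-coded here.
SUBMIT rssm_oltp_init_delta_update
  VIA SELECTION-SCREEN
  AND RETURN.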

Error 11: How to load the delta from a failed delta request:

The delta has missing records, and it is not the latest delta, as we do not know when the problem occurred.

Error 12: CO-PA DataSource: No new data since last delta update (though the delta data exists!)

The first delta has failed, and the repeat delta fetched 0 records even though there are records in the source. It is normal that the first delta after a failed one finishes successfully but extracts zero records. The reason is that this is a repeat from the delta queue: because the preceding delta run did not extract anything (due to the status of the DataSource in KEB2), the delta queue was empty, so the repeat simply extracts the empty delta queue. After such a successful repeat, just execute another delta run, which should extract the desired data.

Error 13: Write-optimized DSO has not fetched the delta

Run report RSSM_WO_DSO_REPAIR_RSBODSLOG, which generates the missing entries.

Questions:

1. What are the possible reasons for delta invalidation?

Delta management can be invalidated for several reasons, for example:
1) InfoCube: a request is compressed that has not yet been delta-updated to all downstream targets.
2) A request is deleted that has already been delta-updated to one or more downstream targets.

How to detect delta invalidation:

To check whether delta management has been invalidated, look at field DMDELTAINA (delta status) in table RSDMSTAT; it will be set to X. In addition, table RSDMDELTA will not contain any data for this source/target combination.
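A quick list of invalidated combinations can be produced with a sketch like the one below; it only uses the DMDELTAINA field mentioned above, and the remaining columns of the hits are best inspected in SE16.

* Illustrative check: find all Data Mart entries whose delta management
* has been invalidated (DMDELTAINA = 'X').
DATA: lt_stat TYPE STANDARD TABLE OF rsdmstat,
      lv_cnt  TYPE i.

SELECT * FROM rsdmstat
  INTO TABLE lt_stat
  WHERE dmdeltaina = 'X'.

DESCRIBE TABLE lt_stat LINES lv_cnt.
IF lv_cnt = 0.
  WRITE: / 'No invalidated delta found in RSDMSTAT.'.
ELSE.
  WRITE: / 'Invalidated source/target combinations:', lv_cnt.
ENDIF.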

2. Is it better to use a repair full request or a re-init in case of a missing delta?

It depends on factors such as the volume of the missing delta, the possibility of selective deletion and the type of downstream data target.
Note that you should only use a repair full request if you have a small number of incorrect or missing records. Otherwise, SAP always recommends a re-initialization (possibly after a prior selective deletion, followed by restricting the delta-init selection to exclude areas that were not changed in the meantime).
See note 739863 for further details.

3. Is it possible to do an initialization without data transfer with a DTP?

Unlike delta transfer using an InfoPackage, an explicit initialization of the delta process is not necessary for delta transfer with a DTP. When the data transfer process is executed in delta mode for the first time, all existing requests are retrieved from the source and the delta status is initialized. This means that, in order to re-init, all existing requests must be deleted.

If you want to execute a delta without transferring data, analogous to the simulation of the delta initialization with the InfoPackage, select No data transfer; delta status in source: fetched as processing mode. This processing mode is available when the data transfer process extracts in delta mode. A request started like this marks the data that is found in the source as fetched, without actually transferring it to the target.

If you no longer need a delta DTP, you can execute report RSBK_LOGICAL_DELETE_DTP for the delta DTP that you want to deactivate. Afterwards, the delta DTP can no longer be executed, and requests that have already been transferred are no longer taken into account in the calculation of the data mart water level.
For the data mart, the system behaves as if the delta DTP never existed.
Refer to note 1450242 for further details.
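Which requests a delta DTP has already transferred (and would therefore count towards the data mart water level) can be listed roughly as below. The DTP technical name is a placeholder, and the RSBKREQUEST field name used in the WHERE clause is an assumption to verify in SE11.

* Illustrative check: count the requests already created by one DTP,
* e.g. before logically deleting it with RSBK_LOGICAL_DELETE_DTP.
DATA: lv_reqs TYPE i.

SELECT COUNT( * ) INTO lv_reqs
  FROM rsbkrequest
  WHERE dtp = 'DTP_4XYZ123456789'.     "placeholder DTP name; field name assumed

WRITE: / 'Requests already loaded by this DTP:', lv_reqs.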

4. Important tables in Delta / Data mart Scenarios:

RSSDLINIT (BW system): Last Valid Initializations to an OLTP Source
ROOSPRMSC (OLTP system): Control Parameter Per DataSource Channel
ROOSPRMSF: Control Parameters Per DataSource
RSSDLINITSEL: Last Valid Initializations to an OLTP Source
RSDMDELTA: Data Mart Delta Management
This table contains the list of all requests that have already been fetched by the target system.
ROOSGENDLM: Generic Delta Management for DataSources (Client-Dep.)
RSBKREQUEST: DTP Request
RSSTATMANREQMAP: Mapping of Higher- to Lower-Level Status Manager Request
RSDMSTAT: Data mart Status Table
This table is used for processing a Repeat Delta. Using field DMCOUNTER we can follow up in RSDMDELTA.
RSMDATASTATE: Status of the data in the InfoCubes
This table is used to determine the state of the data target regarding compression, aggregation, etc. For data marts, the two fields DMEXIST and DMALL are used.
RODELTAM: BW Delta Process
ROOSGEN: Generated Objects for OLTP Source
RSSELDONE: Monitor: Selections for executed request
RSSTATMANPART: Store for G_T_PART of Request Display in DTA Administration
RSBODSLOGSTATE: Change log Status for ODS Object
ROOSOURCE: Table Header for SAP BW OLTP Sources
Mainly used to get information about the initialization of the extractor. It contains the header metadata of DataSources, e.g. whether the DataSource is capable of delta extraction.

5. Important corrections in BC-BW area:

1460643 Performance problems with asynchronous RFC
1250813 SAPLARFC uses all dialog work processes
1377863 Max no of gateways exceeded
1280898 IDoc hangs when it is to be sent immediately with
1055679 IDoc-tRFC inbound with immediate processing may not
1051445 qRFC scheduler does not use all available resources
1032638 Myself extraction: Cursor disappears, IDocs in
995057 Multiple execution of tRFC
977283 Outbound scheduler remains in WAITING status for a long time

Wednesday, 12 September 2012

Time stamp error

Reason

The "Time Stamp" error occurs when the transfer rules or the transfer structure are internally inactive in the system.
It can also occur whenever the DataSources are changed on the R/3 side or the DataMarts are changed on the BW side. In that case the transfer rules show an active status when checked, but they are actually not active; this happens because the time stamps of the DataSource and the transfer rules differ.


Solution 

1. Go to RSA1 --> Source system --> Replicate DataSource


2. Run the program RS_TRANSTRU_ACTIVATE_ALL 


3. Enter the source system and the InfoSource and then execute.


The transfer structure will now be activated automatically. Then proceed with the reload; it should now succeed.
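If you prefer to trigger step 2 from a small wrapper instead of SE38, a hedged sketch is shown below; because the selection-screen parameter names for the source system and InfoSource can differ between releases, the report is opened via its own selection screen:

* Illustrative wrapper for step 2: open RS_TRANSTRU_ACTIVATE_ALL via its
* selection screen so that source system and InfoSource can be entered
* manually (their parameter names are not hard-coded here).
SUBMIT rs_transtru_activate_all
  VIA SELECTION-SCREEN
  AND RETURN.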

Error calling number range object


Solution

1. Note down the InfoCube and Dimension name.
2. Go to transaction RSRV --> All Elementary Tests --> Transactional Data, double-click "Comparison of Number Range of a Dimension and Maximum DIMID", then click the same entry in the right-hand pane, enter the InfoCube and dimension names, click the Transfer button, and finally click Correct Error at the top.

Tuesday, 11 September 2012

Monitoring Issues: How to solve an InfoPackage failure with a tRFC connection error

In this article we will learn how to handle an InfoPackage that fails with a tRFC issue in production monitoring. This is one of the most common issues in monitoring.

What is RFC?
Remote Function Call (RFC) is the standard SAP interface for communication between SAP systems. In the SAP environment, communication between applications in different systems includes connections between SAP systems as well as between SAP systems and non-SAP systems.
There are two types of RFC connections:
Synchronous RFC
Asynchronous RFC (now called transactional RFC, i.e. tRFC)

Synchronous RFC:
This type of RFC executes the function call based on synchronous communication, i.e. the systems involved in the communication must both be available at the time the call is made.

tRFC (asynchronous RFC):
This is an asynchronous communication method that executes the called function module just once in the RFC server. The remote system does not need to be available at the time when the RFC client program is executing a tRFC. The tRFC component stores the called RFC function, together with the corresponding data, in the SAP database under a unique ID called the transaction ID (TID).

If a call is sent, and the target system is down, the call waits in the local queue. The calling dialog program can proceed without waiting to see whether the remote call was successful. If the receiving system does not become active within a certain time, then the call is scheduled to run in batch.

How to find the issue?
When data is transferred from the source system to BW, it is sent in the form of packets. Sometimes the InfoPackage may get stuck at certain packets. Whenever we find an InfoPackage taking longer than usual, we can suspect a tRFC issue.

We can also check whether a packet has a stuck tRFC by selecting the particular packet under the "Transfer (IDocs and TRFC):" option in the Details tab of the process monitor screen of the InfoPackage. In this case a button for the stuck tRFC appears under the Details tab, as shown below.

How to solve the issue?
Before we rectify the issue, we need to perform certain checks. Suppose an InfoPackage has been stuck for a while in yellow status, and it usually does not take this long for roughly the same amount of data as in previous runs; first check whether the extraction has completed. This can be done in the Details tab of the process monitor screen of the InfoPackage: if you see the message "Data Selection Ended", the extraction job has finished. Alternatively, check the extraction job in the source system via the menu Environment --> Job Overview --> In Source System.

Now go to the extraction job's job log in the source system and search for "SYSFAIL". You will find it adjacent to the transaction ID (TID) of the particular packet in the job log. All the packets that failed will have their ARFCSTATE as SYSFAIL, as shown below. Note down the TIDs of all such packets.
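The same stuck LUWs that SM58 will show in the next step can also be counted programmatically in the source system with a check like the sketch below. The destination value is a placeholder, and ARFCSSTATE is assumed here to be the tRFC state table whose columns should be verified in SE11.

* Illustrative check in the source system: count tRFC LUWs stuck in
* status SYSFAIL for the BW destination, i.e. the entries that
* transaction SM58 displays.
DATA: lv_fail TYPE i.

SELECT COUNT( * ) INTO lv_fail
  FROM arfcsstate
  WHERE arfcdest  = 'BWPCLNT100'       "placeholder tRFC destination
    AND arfcstate = 'SYSFAIL'.

WRITE: / 'tRFC LUWs in SYSFAIL for this destination:', lv_fail.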


Then go to transaction SM58 in the source system of that InfoPackage. You will see the screen shown below.


Before executing, set the user name to "*", enter the required date, and enter the target system name in the TRFC Destination field, then execute. You will then see the list of tRFCs; search for the required TID as shown below. All the failed tRFCs have the status "Transaction recorded", as shown below.


In the same way, search for all the failed tRFCs according to the noted TIDs. Execute them via the menu Edit --> Execute LUW (F6).


Once all the failed tRFCs are cleared from the queue, refresh the InfoPackage and it should complete successfully. It may not succeed if there is a delay in clearing the stuck tRFCs in the source system; in that case we need either to push each unprocessed packet manually, or to repeat the InfoPackage and clear the tRFCs in the source system regularly.