Monday, 13 May 2013

Full/Delta/Initialize delta update methods

Introduction

The update method determines how data is extracted from the source system into the BI system at the InfoPackage level. We set the update method on the Update tab of the InfoPackage.

The update methods in the info package are:

1. Full Update
2. Delta Update
3. Initialize Delta Process
    (I) Initialize with data transfer
    (II) Initialize without data transfer
    (III) Early Delta Initialization

1. Full Update

A full update extracts the complete data set from the source system to the PSA in BI 7 every time it runs.

2. Delta Update

A delta update extracts only the changed (delta) records from the BW delta queue in the source system to the BI system.

The delta must be initialized first; otherwise it is not possible to load delta records.

A DataSource can have one of the following four delta types:

F: Flat file provides the delta
E: Extractor determines the delta, Ex: LIS, COPA
D: Application determines the delta, Ex: LO, FI-AR/AP
A: Use ALE change log delta

Note: The delta properties of a DataSource can be checked in table ROOSOURCE in the source system using transaction SE16.
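For illustration, the same information can also be read with a small ABAP report. This is only a sketch: OLTPSOURCE, OBJVERS and DELTA are the usual ROOSOURCE fields, and 2LIS_02_ITM is just an example DataSource name.

REPORT z_show_delta_process.
* Sketch: read the delta process of a DataSource from ROOSOURCE
* (active version 'A'). 2LIS_02_ITM is only an example DataSource.
DATA: lv_delta TYPE roosource-delta.

SELECT SINGLE delta
  FROM roosource
  INTO lv_delta
  WHERE oltpsource = '2LIS_02_ITM'
    AND objvers    = 'A'.

IF sy-subrc = 0.
  WRITE: / 'Delta process of 2LIS_02_ITM:', lv_delta.
ELSE.
  WRITE: / 'DataSource not found in ROOSOURCE.'.
ENDIF.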

3. Initialize Delta Process

To receive delta records, the delta process must first be initialized. During initialization the system sets the initialization flag for the source system in BI (visible in the Scheduler menu of the InfoPackage) and creates an entry in the BW delta queue for the DataSource in the source system (RSA7). This enables the timestamp mechanism for subsequent delta loads.

Initialize with data transfer

With this option, the initialization request extracts the existing data from the source system to the BI system and at the same time enables delta functionality.

Steps for initialize with data transfer

Lock the users in the source system.
Delete the contents of the setup tables for the relevant application component in the source system (transaction LBWG).
Fill the setup tables (SBIW, or OLI*BW with 1, 2, 3... in place of * according to the application).
Run the InfoPackage with 'Initialize with data transfer'.
Unlock the users in the source system.

Note: This is a very time-consuming process because the users must stay locked until the data reaches the BI system, which affects the client's business.

Initialize without data transfer

In some cases the initialization was successful but the init flag has since been deleted. To set the flag again and resume delta loads without disturbing the data, we execute the InfoPackage with this option.

Steps for initialize without data transfer

Lock the users in the source system.
Delete the contents of the setup tables for the specific application component.
Fill the setup tables.
Run the InfoPackage with the option 'Initialize without data transfer'.
Unlock the users in the source system.
Load the data to the BI system using a repair full request InfoPackage.

Note: With this method the users can be unlocked as soon as the setup tables are filled, so it is a better option than 'Initialize with data transfer'.

Early Delta Initialization

With this option, the delta initialization is done before the setup tables are filled, so users can continue posting documents while the setup tables are being filled. The documents posted during that time are picked up in the next delta run.

Steps for early delta initialization

Run the Info package with early delta initialization option.This will enable the BW delta queue and setup the time stamp for delta in the source system. 
Delete the setup tables for the application component 
Fill the setup tables 
Load the setup table data using repair full request (scheduler menu option of info package) info package 

How to check whether the data source supports early delta initialization or not?

Go to SE16 in ECC, enter the table name ROOSOURCE and press Enter.
On the next screen, enter the DataSource name, for example 2LIS_02_HDR (purchasing document header DataSource), and execute.
If the field ZDD_ABLE has the value 'X', the DataSource supports early delta initialization.
If the field is blank, the DataSource does not support early delta initialization.
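If you prefer to check this programmatically, here is a minimal sketch; the DataSource name is only an example, and ZDD_ABLE is the ROOSOURCE field mentioned above.

REPORT z_check_early_delta.
* Sketch: check the early-delta-initialization flag of a DataSource.
DATA: lv_flag TYPE c LENGTH 1.

SELECT SINGLE zdd_able
  FROM roosource
  INTO lv_flag
  WHERE oltpsource = '2LIS_02_HDR'
    AND objvers    = 'A'.

IF lv_flag = 'X'.
  WRITE: / '2LIS_02_HDR supports early delta initialization.'.
ELSE.
  WRITE: / '2LIS_02_HDR does not support early delta initialization.'.
ENDIF.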

Friday, 10 May 2013

Difference between V1, V2, V3 updates

V1 (synchronous update): used to update the application (LIVE) tables; the system sends an acknowledgement only after the records have been updated, which slows down dialog processing.

V2 (asynchronous update): the LUWs are processed asynchronously, without the dialog process waiting for an acknowledgement.

V3 (asynchronous update with background scheduling): the update is collected and processed later by a background job.
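As a rough illustration of how an application hands work to the update task in ABAP: the function name and the document structure below are hypothetical, and whether the called module runs as V1 or V2 is decided by the update module's attributes in SE37 ('Start immediately' vs 'Start delayed'), not by the calling code.

REPORT z_update_demo.
* Sketch only: Z_UPDATE_SALES_STATS does not exist in the standard
* system; it stands for any customer update function module.
DATA: ls_document TYPE vbak.   " example: sales document header

CALL FUNCTION 'Z_UPDATE_SALES_STATS' IN UPDATE TASK
  EXPORTING
    is_document = ls_document.

* The registered V1 and V2 update modules run when the transaction
* commits; V3 modules wait for the collective run.
COMMIT WORK.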

These update types are used by the LO extraction update modes.

Direct delta writes each document change straight to the BW delta queue as part of the V1 update. It is recommended only when the number of documents is small.

Queued delta is recommended for most cases. Document changes are written to the extraction queue (LBWQ) during the V1 update, and a background job (the collective run, often called the V3 job) moves them from the extraction queue to the BW delta queue. The records are moved in order, so there is no problem with the document sequence.

Unserialized V3 update collects the changes via the V3 update and moves them to the delta queue with the collective run without preserving the document sequence, so it is recommended only when the order of the data is not important. A standard DSO (with overwrite) is not recommended as a target with this update mode.

Friday, 3 May 2013

Common Issues with Open Hub

1. Error message 'Destination not supported' (error code RSBO 102) when creating an Open Hub Destination

First, the corrupt Open Hub Destinations need to be repaired. To do this, check table RSBOHDEST: for all Open Hub Destination entries where the field DESTYPE is initial/empty or equal to 'FILE', change DESTYPE to 'TAB' directly in the table (TAB is used because no other valid DESTYPE can be selected in the table). Once the error message 'Destination type is not supported' no longer appears, you can change the destination type to what it should be in the Open Hub Destination maintenance in RSA1.

For users who worked with Open Hub and can no longer access RSA1 (error message 'not supported', RSBO 102), there is usually an entry per affected user in the personalisation table RSAWBN_USR_TREE with the field AWBUSER equal to the user ID that has the problem and the field AWBTREE = 'DEST'. Delete the entries returned by this selection on AWBUSER and AWBTREE, but only for the user IDs that have the problem, and the issue should be resolved.
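A read-only sketch to find the entries described above before correcting them in SE16; the table and field names RSBOHDEST/DESTYPE and RSAWBN_USR_TREE/AWBUSER/AWBTREE come from the text, while the report itself is purely illustrative.

REPORT z_check_ohd_repair.
* Sketch: list the rows that the notes above tell you to correct/delete.
DATA: lt_dest TYPE STANDARD TABLE OF rsbohdest,
      lt_tree TYPE STANDARD TABLE OF rsawbn_usr_tree,
      lv_cnt  TYPE i.

* Destinations whose type is empty or FILE (to be set to TAB in SE16)
SELECT * FROM rsbohdest INTO TABLE lt_dest
  WHERE destype = space OR destype = 'FILE'.
DESCRIBE TABLE lt_dest LINES lv_cnt.
WRITE: / 'Open Hub Destinations with empty/FILE type:', lv_cnt.

* Personalisation entries pointing to the DEST tree (delete only for
* the affected users)
SELECT * FROM rsawbn_usr_tree INTO TABLE lt_tree
  WHERE awbtree = 'DEST'.
DESCRIBE TABLE lt_tree LINES lv_cnt.
WRITE: / 'Workbench personalisation entries with AWBTREE = DEST:', lv_cnt.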

2. Errors when sending data to a 3rd party Open Hub Destination 

Message no. RSBO523: SYSTEM_FAILURE with function RSB_API_OHS_3RDPARTY_NOTIFY and target system x
Message no. RSBO899: Bean RSB_API_OHS_3RDPARTY_NOTIFY not found on host x, ProgId=x: Object not found in lookup of x
Message no. RSBK241: Error while updating to target x (type Open Hub Destination)

When you execute the DTP for the Open Hub Destination, the system stores the data in a database table. The 3rd party tool is then notified via the function RSB_API_OHS_3RDPARTY_NOTIFY; this function has to be implemented by the 3rd party tool, not in the BW system.

To resolve this issue:

Check if the function exists on the 3rd party tool using "Extras" --> "function list" after you select the destination in SM59

Implement the necessary function on the 3rd party tool

If you require assistance to implement the function on the 3rd party system, please contact the relevant 3rd party support desk.

Some useful T-codes for BI performance tuning

T-Code           Description
SM66             Global Work Process Overview
ST02             Tune Summary
ST06             System Monitor
STAD             SAP Workload
ST05             SQL Trace
SE30             ABAP Trace
ST12             Single Transaction Analysis (including ST05/SE30)
RSMO             BW Load Monitor
DB02             DB Load Overview
ST04             DB Performance Snapshot
RSBATCH          BI Background Management
RSODSO_SETTINGS  Maintenance of runtime parameters of DSO
RSRV             Analysis and Repair of BI Objects
ST03             Workload in System
RSRT             Query Monitor
RSRTRACE         Configure Trace Tool


Friday, 26 April 2013

Disadvantages of AGGREGATES, COMPRESSION, IC PARTITIONING, INDEXES, LINE ITEM DIMENSIONS

Disadvantages of 

(1) AGGREGATES

Although aggregates are meant to improve query performance, creating too many of them degrades it.
Until roll-up takes place, the query does not hit the aggregate cube.
The main disadvantage of aggregates is that they store data physically and redundantly; more aggregates means more wasted storage.

(2) COMPRESSION

Once a cube is compressed, the request IDs are removed, so deletion by request ID is no longer possible.
A compressed request cannot be reverted to its normal state, which makes deletion difficult.

(3) Partition

Handling several thousand partitions usually impacts database performance.
In 3.x an InfoCube cannot be partitioned after data has been loaded; in BI 7.0 repartitioning is possible.

(4) INDEXES

If you do not drop the indexes before loading, the data load is slow.
If you do not rebuild the indexes before reporting, the reporting is slow.
For large data volumes, creating and deleting indexes consumes a lot of time.

(5) LINE ITEM DIMENSIONS

A line item dimension can only be set when the dimension table contains exactly one characteristic.
Since each line item dimension holds only one characteristic, using many of them reduces the number of characteristics the cube can accommodate.

What is reconciliation?

Reconciliation is the comparison of the data in a BW target with the data in the source system, for example R/3.

In general this is done at three places: comparing the InfoProvider data with the R/3 data, comparing the query output with the R/3 or DSO data, and comparing the key figure values in the InfoProvider with those in the PSA.

Archiving the data in SAP BI

Archiving is used to store your data at a remote location to improve the performance in BI.

Archiving is the process of moving data that is no longer required online from the SAP database to a storage system; the archived data can still be read offline whenever the user requires it.

Archiving frees up database space, improves system performance to a great extent, and is cost effective for the client with respect to hardware.

We use the archiving process in various SAP application areas. We can archive master data and transaction data.

Master data such as Customer master data, Vendor master data, Material master data, Batch master data and so on... 

Transaction data such as Sales order, Delivery document, shipment document, Billing document, Purchase requisition, Purchase order, Production Order, Transfer order, Account receivables, Account payables, and so on

Steps to be followed for archiving in 7.0:

1. Go to transaction RSDAP.
2. Enter the InfoProvider name and type, then choose Create.
3. On the General Settings tab, enter the archiving object name.
4. On the Selection Profile tab, set the schedule time.
5. On the ADK tab, specify the logical file name.
6. Activate, and note down the generated archiving object name.
7. Go to transaction SARA, enter your object name and click 'Write'.
8. Create a variant and click Maintain.
9. Select a field, then Continue.
10. Under 'Further Restrictions', enter a value.
11. Under processing options, choose production mode.
12. Save.
13. On the Attributes tab, enter a name and save.
14. Enter the start date and print parameters.
15. Execute.

Usage of compound attribute in reporting

A compounding attribute lets you derive unique data records in reporting.

Suppose you have cost centers and cost accounts like this and you want to maintain the proper relationship:

Cost centers: 1000, 1001, 1002

Cost accounts: 9001, 9002, 9003

The cost accounts are not unique across cost centers, so the master data would be overwritten.

In other words, the cost accounts cannot be differentiated across cost centers.

When you add the cost center as a compounding attribute, each record becomes unique. After compounding, the records look like this in reporting:

9001/1000
9002/1000
9003/1000
9001/1001
9002/1001
9003/1001
9001/1002
9002/1002
9003/1002

Thus each cost account is uniquely differentiated across cost centers.

Compounding objects and its purpose

A compound attribute differentiates a characteristic to make the characteristic uniquely identifiable. 

In the Compounding tab page, you determine whether you want to compound the characteristic to other InfoObjects. You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding. 

For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, compound characteristic Storage Location to Plant, so that the characteristic is unique. One particular option with compounding is the possibility of compounding characteristics to the source system ID. You can do this by setting the Master data is valid locally for the source system indicator. You may need to do this if there are identical characteristic values for the same characteristic in different source systems, but these values indicate different objects. 

Recommendation : Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in compounding, can influence performance. Do not try to display hierarchical links through compounding. Use hierarchies instead. 

Note: A maximum of 13 characteristics can be compounded for an InfoObject. Also note that characteristic values can have a maximum of 60 characters; this includes the concatenated value, i.e. the total length of the characteristics in the compounding plus the length of the characteristic itself.

Reference InfoObjects

If an InfoObject has a reference InfoObject, it inherits its technical properties:

· For characteristics these are the data type and length as well as the master data (attributes, texts and hierarchies). The characteristic itself also has the operational semantics. 
· For key figures these are the key figure type, data type and the definition of the currency and unit of measure. The referencing key figure can have another aggregation. 

These properties can only be maintained with the reference InfoObject. Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically have the same technical properties and master data. The operational semantics, that is the properties such as description, display, text selection, relevance to authorization, person responsible, constant, and attribute exclusively, are also maintained with characteristics that are based on one reference characteristic. 

Example: The characteristic Sold-to Party is based on the reference characteristic Customer and, therefore, has the same values, attributes, and texts. More than one characteristic can have the same reference characteristic: the characteristics Sending Cost Center and Receiving Cost Center both have the reference characteristic Cost Center.

Example: Typically in an organization, employee IDs are allocated serially, say 101, 102 and so on. Suppose your organization introduces a new employee ID scheme where the IDs for each location start from 101. The employee ID for India would then be India/101 and for the UK it would be UK/101. Note that India/101 and UK/101 are different employees. If someone has to contact employee 101, he needs to know the location, without which he cannot uniquely identify the employee. Hence, in this case, location is the compounding attribute.

Importance of semantic groups in DTP

Use semantic groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined in a single data package.

This setting is only relevant for Data Store objects with data fields that are overwritten. This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.

A very simple example:

Lets say there are two records in the input stream of data for a DTP.

Product   Material   (other fields)

P1        M1         XYZ
P1        M1         PQR

If the data gets divided into multiple packages during processing, the above two records might end up in separate data packages. If you define a semantic group in the DTP with Product and Material, the system will always put these two records into the same package. This is required when all such records need to be processed together, for example in a start routine (see the sketch below).
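A minimal start routine sketch for this scenario; the field names PRODUCT and MATERIAL are illustrative, and the surrounding method signature is generated by the system. The logic only works reliably because the semantic group guarantees that all records with the same Product/Material key arrive in the same SOURCE_PACKAGE.

* Start routine body (sketch): keep only one record per Product/Material
* key; safe only because the DTP semantic group puts records with the
* same key into the same data package.
    SORT source_package BY product material.
    DELETE ADJACENT DUPLICATES FROM source_package
           COMPARING product material.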

Note that currently only data read from the PSA can be processed further in semantic groups.

In BW 3.x the InfoPackage transfers data directly to the targets, whereas in BI 7 the InfoPackage only brings data into the PSA; the Data Transfer Process (DTP) then moves the data from the PSA to the target. The PSA is only a staging area, not a target (we cannot do anything with the data there).

Difference between DTP and IP in BI 7.0

The key difference between 3.5 and 7.0 is that in 3.5 an InfoPackage loads from a single DataSource to multiple data targets (InfoCubes, ODS etc.). If the source was sending a delta, the load had to be done as a single data load to all targets, meaning the delta had to be loaded to all targets at the same time; you could not load them at different times.

In 7.0, the DTP maintains delta handling from the PSA to the different targets, enabling you to load the delta to each target independently of the others, because each target has its own delta queue. This is a big change in 7.0 and helps with load distribution.

Error handling mechanisms in BI 7.0 and BW 3.5

Error stack in the DTP

In case of the 7.0 version:

1) DTP: In the DTP you find the Update tab, which offers the following error handling options:
a. No update, no reporting.
b. Update valid records only, reporting not possible (request red).
c. Update valid records only, reporting possible (request green).

In case of the 3.x version:

2) InfoPackage: On the Update tab, under 'Data update types in the data targets', you find the 'Error handling' push button; clicking it shows all the error handling options.

Rule types in transformations

1. Constant
2. Direct Assignment
3. Formula
4. Read Master Data
5. No Transformation
6. Routine
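For rule type 6 (Routine), the routine body is plain ABAP that fills RESULT from the source fields of the current record. A minimal sketch with illustrative field names follows; the method signature around it is generated by the system.

* Characteristic rule routine body (sketch): derive the target value
* from two source fields. COAREA and COSTCENTER are assumed names.
    CONCATENATE source_fields-coarea source_fields-costcenter
           INTO result SEPARATED BY '/'.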

Behavior of the transfer routine at characteristic level in transformations

When you create a transfer routine, it is valid globally for the characteristic and is included in all the transformation rules that contain the InfoObject. However, the transfer routine is only run in one transformation with a DataSource as a source. The transfer routine is used to correct data before it is updated in the characteristic. 

During data transfer, the logic stored in the individual transformation rule is executed first. Then the transfer routine for the value of the corresponding field is executed for each InfoObject that has a transfer routine. 

In this way, the transfer routine can store InfoObject-dependent coding that only needs to be maintained once, but that is valid for all transformation rules.
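A minimal sketch of what such a transfer routine body might do. The surrounding signature is generated by the system when the routine is created on the InfoObject; SOURCE_VALUE is an illustrative name for the incoming value, and the cleansing shown here is only an example.

* InfoObject transfer routine body (sketch): cleanse the value once,
* centrally, before it is updated in the characteristic.
    result = source_value.
    TRANSLATE result TO UPPER CASE.   " unify case
    CONDENSE result NO-GAPS.          " remove blanks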

What is the impact on existing routines if we create an expert routine?

When you create an expert routine, the existing rules and routines are automatically deleted or deactivated.

We have the following types of routines in BI 7:

Start Routine:

The start routine is run for each data package at the start of the transformation. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package.

Routine for Key Figures or Characteristics:

This routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule. 

End Routine:

An end routine is a routine with a table in the target structure format as input and output parameters. You can use an end routine to post process data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks.

Expert Routine:

This type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. The expert routine should be used as an interim solution until the necessary functions are available in the standard routine. 
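A minimal expert routine sketch: because the expert routine replaces all rules, the source-to-target mapping has to be coded explicitly. The types _ty_s_SC_1 / _ty_s_TG_1 stand for the generated source/target structures of the routine class, and the field names are illustrative assumptions.

* Expert routine body (sketch): map each source record to a target record.
    DATA: ls_result TYPE _ty_s_tg_1.
    FIELD-SYMBOLS: <ls_source> TYPE _ty_s_sc_1.

    LOOP AT source_package ASSIGNING <ls_source>.
      CLEAR ls_result.
      ls_result-material = <ls_source>-matnr.   " illustrative mapping
      ls_result-quantity = <ls_source>-menge.
      APPEND ls_result TO result_package.
    ENDLOOP.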

Difference among start routines, end routines and expert routines

Start Routine: Start routine runs before the transformation rules. It manipulates the source data package. The source data package is in the structure of data source. The start routine has a table in the format of the source structure as input and output parameters. It is used to perform preliminary calculations and store these in a global data structure or in a table. This structure or table can be accessed from other routines. You can modify or delete data in the data package. Generally used for Filtering records.

End Routine: End routine runs after the transformation rules. It manipulates the target data package. The result package is in the structure of the target object. It is a routine with a table in the target structure format as input and output parameters. You can use an end routine to post process data after transformation on a package-by-package basis. For example, you can delete records that are not to be updated, or perform data checks. 
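A minimal end routine sketch for the 'delete records that are not to be updated' case mentioned above; QUANTITY is an illustrative key figure name, and the method signature around the body is generated by the system.

* End routine body (sketch): post-process the result package after all
* transformation rules have run.
    DELETE result_package WHERE quantity = 0.   " drop records not to be updated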


Expert Routine: This triggers without any transformation rules. Whenever we write an expert routine, all existing rules are deleted. It is generally used for custom logic and is helpful when complex or better-performing transformations are needed.

When do we use a write-optimized DSO?


A write-optimized DSO is used to load large volumes of data.

a. Used where fast loads are essential, for example multiple loads per day or short source system access times (worldwide system landscapes).
i) If the DataSource is not delta enabled. In this case you would want a write-optimized DataStore to be the first staging layer in BI and then pull a delta request into a cube.
ii) A write-optimized DataStore object is used as a temporary storage area for large sets of data when executing complex transformations for this data before it is written to the DataStore object. Subsequently, the data can be updated to further InfoProviders. You only have to create the complex transformations once for all incoming data.
b. Write-optimized DataStore objects can be the staging layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.
c. If you want to retain history at request level. In this case you may not need a PSA archive; instead you can use a write-optimized DataStore.
d. If multidimensional analysis is not required and you want operational reports, you might want to use a write-optimized DataStore first and then feed the data into a standard DataStore.
e. It can also serve as a preliminary landing area for incoming data from different sources.
f. If you want to report on daily refreshed data without activation, it can be used in the reporting layer with an InfoSet or MultiProvider.

Functionality of Write-Optimized DataStore 

Only active data table (DSO key: request ID, Packet No, and Record No): 

o No change log table and no activation queue. 
o Size of the DataStore is maintainable. 
o Technical key is unique. 
o Every record has a new technical key, only inserts. 
o Data is stored at request level like PSA table. 

No SID generation: 

o Reporting is possible (but you need to make sure performance is optimized).
o BEx reporting is switched off.
o Can be included in an InfoSet or MultiProvider.
o Performance improvement during data load.

Fully integrated in data flow: 

o Used as data source and data target 
o Export into info providers via request delta 

Uniqueness of Data: 

o Checkbox "Do not check uniqueness of data".
o If this indicator is set, the active table of the DataStore object can contain several records with the same key.

Allows parallel load. 

Can be included in a process chain without an activation step.

Supports Archiving.

Difference between SAP BI 3.x, 7.0, 7.3

Major differences between SAP BW 3.5 and SAP BI 7.0:


1. In InfoSets you can now include InfoCubes as well.
2. The remodeling transaction helps you add new key figures and characteristics and handles historical data as well without much hassle. This is only for InfoCubes.
3. The BI accelerator (for now only for InfoCubes) helps in reducing query run time by almost a factor of 10 to 100. The BI accelerator is a separate box and would cost more; vendors for it would be HP or IBM.
4. Monitoring has been improved with a new portal-based cockpit, which means you would need an EP resource in your project for implementing the portal!
5. Search functionality has improved! You can search for any object, unlike in 3.5.
6. Transformations are in and routines are passé! Yes, you can always revert to the old transactions too.
7. The Data Warehousing Workbench replaces the Administrator Workbench.
8. Functional enhancements have been made for the DataStore object: a new type of DataStore object and enhanced settings for performance optimization of DataStore objects.
9. The transformation replaces the transfer and update rules.
10. New authorization objects have been added.
11. Remodeling of InfoProviders supports you in Information Lifecycle Management.
12. The DataSource: there is a new object concept for the DataSource, options for direct access to data have been enhanced, and remote activation of DataSources in SAP source systems is possible from BI.
13. There are functional changes to the Persistent Staging Area (PSA).
14. BI supports real-time data acquisition.
15. SAP BW is now formally known as BI (part of NetWeaver 2004s). It implements Enterprise Data Warehousing (EDW). The new features / major differences include:
a) ODS renamed to DataStore.
b) Inclusion of the write-optimized DataStore, which does not have a change log and whose requests do not need any activation.
c) Unification of transfer and update rules.
d) Introduction of the end routine and the expert routine.
e) Push of XML data into the BI system (into the PSA) without Service API or delta queue.
f) Introduction of the BI accelerator, which significantly improves performance.
16. Loading through the PSA has become mandatory; you cannot skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process), together with transformations, replaces the transfer and update rules, and within the transformation we can now use start, expert and end routines during the data load.

New features in BI 7 compared to earlier versions:



i. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).
ii. Enhanced and graphical transformation capabilities such as drag and relate options.
iii. One level of transformation, which replaces the transfer rules and update rules.
iv. Performance optimization includes the new BI accelerator feature.
v. User management (includes a new concept for analysis authorizations) for more flexible BI end-user authorizations.

ASAP Methodologies

ASAP stands for Accelerated SAP. Its purpose is to help design SAP implementations in the most efficient manner possible. Its goal is to effectively optimize time, people, quality and other resources, using a proven methodology for implementation. ASAP focuses on tools and training, wrapped up in a five-phase, process-oriented road map for guiding implementation. The road map is composed of five well-known consecutive phases:

Phase 1 Project Preparation
Phase 2 Business Blueprint
Phase 3 Realization
Phase 4 Final Preparation
Phase 5 Go-Live and Support

In today's post we will discuss the first phase.

Phase 1: Project Preparation

Phase One initiates with a retrieval of information and resources. It is an important time to assemble the necessary components for the implementation. Some important milestones that need to be accomplished in phase 1 include:

• Obtaining senior-level management/stakeholder support
• Identifying clear project objectives
• Architecting an efficient decision-making process
• Creating an environment suitable for change and re-engineering
• Building a qualified and capable project team

Senior level management support:
One of the most important milestones with phase 1 of ASAP is the full agreement and cooperation of the important company decision-makers - key stake holders and others. Their backing and support is crucial for a successful implementation.

Clear project objectives:
be concise in defining what your objectives and expectations are for this venture. Vague or unclear notions of what you hope to obtain with SAP will handicap the implementation process. Also make sure that your expectations are reasonable considering your company's resources. It is essential to have clearly defined ideas, goals and project plans devised before moving forward.

An efficient decision making process:
One obstacle that often stalls implementation is a poorly constructed decision-making process. Before embarking on this venture, individuals need to be clearly identified. Decide now who is responsible for different decisions along the way. From day one, the implementation decision makers and project leaders from each area must be aware of the onus placed on them to return good decisions quickly.

Environment suitable for change and re-engineering: Your team must be willing to accept that, along with new SAP software, things are going to change, the business will change, and the information technology enabling the business will change as well. By implementing SAP, you will essentially redesign your current practices to model more efficient or predefined best business practices as espoused by SAP. Resistance to this change will impede the progress of your implementation.

ASAP- Second Phase- Business Blueprint

SAP has defined a business blueprint phase to help extract pertinent information about your company that is necessary for implementation. These blueprints are in the form of questionnaires that are designed to probe for information that uncovers how your company does business. As such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as seen in the following sample questions:

1) What information do you capture on a purchase order?

2) What information is required to complete a purchase order?


Accelerated SAP question and answer database: The question and answer database (QADB) is a simple although aging tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.

Issues database:
Another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in database, assign them to team members, and update the database accordingly.

ASAP Phase- 3 - Realization:

With the completion of the business blueprint in phase 2, "functional" experts are now ready to begin configuring SAP. The Realization phase is broken into two parts.
1) Your SAP consulting team helps you configure your baseline system, called the baseline configuration.
2) Your implementation project team fine-tunes that system to meet all your business and process requirements as part of the fine tuning configuration.

The initial configuration completed during the base line configuration is based on the information that you provided in your blueprint document. The remaining approximately 20% of your configuration that was not tackled during the baseline configuration is completed during the fine tuning configuration. Fine tuning usually deals with the exceptions that are not covered in baseline configuration. This final bit of tweaking represents the work necessary to fit your special needs.

Configuration Testing:
With the help of your SAP consulting team, you segregate your business processes into cycles of related business flows. The cycles serve as independent units that enable you to test specific parts of the business process. You can also work through configuring the SAP Implementation Guide (IMG), a tool used to assist you in configuring your SAP system in a step-by-step manner.

Knowledge Transfer:
As the configuration phase comes to a close, it becomes necessary for the project team to be self-sufficient in their knowledge of the configuration of your SAP system. Knowledge transfer to the configuration team tasked with system maintenance (that is, maintenance of the business processes after Go-live) needs to be completed at this time. In addition, the end users tasked with actually using the system for day-to-day business purposes must be trained.

ASAP Methodology - Phase 4 - Final Preparation:

As phase 3 merges into phase 4, you should find yourselves not only in the midst of SAP training, but also in the midst of rigorous functional and stress testing. Phase 4 also concentrates on the fine tuning of your configuration before Go-live and more importantly, the migration of data from your old system or systems to SAP.
Workload testing (including peak volume, daily load, and other forms of stress testing) and integration or functional testing are conducted to ensure the accuracy of your data and the stability of your SAP system. Because you should have begun testing back in phase 2, you do not have too far to go until Go-live. Now is an important time to perform preventative maintenance checks to ensure optimal performance of your SAP system. At the conclusion of phase 4, take time to plan and document a Go-live strategy. Preparation for Go-live means preparing for your end users' questions as they start actively working on the new SAP system.

ASAP - Phase 5 - Go-live and Support:

The Go-live milestone itself is easy to achieve; a smooth and uneventful Go-live is another matter altogether. Preparation is the key, including attention to what-if scenarios related not only to the individual business processes deployed but also to the functioning of the technology underpinning these business processes. Preparation for ongoing support, including maintenance contracts and documented processes and procedures, is also essential.