Channel: SCN : Document List - SAP Planning and Consolidation, version for SAP NetWeaver

An easy process to find and transport SAP BPC 10 objects


Introduction:

I have been working on BPC projects (implementation, enhancement, and support), and transport collection is a common activity in all of them. Many times we leave this activity as a last step, so it demands a lot of attention during object collection. At times we miss details of a few objects and have to hunt for them, delaying the transport collection. With the features SAP provides, however, we can avoid the delay and achieve a quicker collection process.

 

After working on ten BPC Planning and Consolidation projects, I have found this to be one of the simplest processes for collecting objects for transport.

 

 

Step-1. BPC transports are needed to move changes from DEV to UAT and PROD. The respective system needs to be taken offline while a transport is moved.

  1. Log into BPC Web
  2. Under planning and consolidation
  3. Manage Environments

1.JPG

2.JPG

Step-2.

Go to the GUI logon and type transaction code RSA1

3.JPG

  1. Select the Environment  4.JPG
  2. All the BPC objects are collected here.
  3. Use the find option to search for any BPC object you need.

5.JPG

 

For Example:

6.JPG

Result: all objects with a matching name are displayed (much like a search engine).

7.JPG

All files containing “lgf” are displayed and can be selected and transported accordingly (see the image above).

 

Hint: this process lets us drill down to objects with respect to models. We may sometimes forget an object name, but we usually remember the model or environment.

 

The same rule also applies across models.

8.JPG

Once the objects are identified, collect them using the following steps (the common BW process):

  1. Navigate to Transport Connections
  2. Double click on Object Types
  3. Navigate to More Types
  4. Double click on ‘Select Objects’ 
  5. Place the cursor over the checkbox & right click
  6. Select ‘DO NOT TRANSPORT ANY BELOW’ 
  7. Select Objects to be transported
    1. Some objects are located under each component type. (As below)

9.JPG

 

10.JPG

 

Lesson Learned/Achievement: With the above process I could complete 1) the object collection and 2) the transport movement from DEV → UAT → PRODUCTION in 30 minutes. I consider this a valuable addition to our day-to-day work.


Best Practices For Reporting Against SAP Business Planning and Consolidation (Powered by SAP HANA), utilizing the EPM Add-in for Excel


Welcome to this blog, which covers some key concepts and best practices to keep in mind when designing a report or input schedule with the EPM Add-in for Excel on top of SAP Business Planning and Consolidation 10.0 (powered by SAP HANA). This document will be updated as new concepts come in from implementation teams. But before jumping straight to the best practices, here is a quick introduction to this front-end tool.

 

EPM ADD-IN VERSIONS

 

EPM Add-in dotnet 3.5:

  • Supports only 32-bit Office
  • Not limited to certain functions
  • Performance hit when querying via an ODBO connection
  • Runs on the .NET 3.5 framework (has a 1.2 GB RAM limitation)
  • Can be updated with an *.msp file (no need to uninstall/re-install)
  • Master data is downloaded to the client (no support for BIG VOLUME (BV) mode)

 

EPM Add-in dotnet 4.0:

  • Supports 32-bit or 64-bit Office
  • Possibility to enable BIG VOLUME mode on BW data sources (ODBO/XMLA connections only); both Standard mode and Big Volume mode are supported
  • In Standard mode, it offers the same features as the 3.5 version
  • Better performance via an ODBO connection
  • Updating requires an uninstall/re-install (no *.msp patching)
  • Supports 64-bit Office 2010 to get the most out of client-side performance (no RAM limitation)
  • Supports SAP BW (INA provider) connections (for the BPC 10.1 embedded model)

 

Big Volume (ODBO connection)

With the Big Volume mode enabled, you cannot write data back to the database, you can only render data. Additionally, certain specific BPC functions are not available (EPMMemberProperty/EVPRO, etc…), as well as some EPM add-in functionalities (Freeze data refresh, Member Selector – dimension/member properties and Data Ranking/Sorting).

 

The Big Volume mode changes the interface for member selection so that dimension members are displayed in pages instead of the default tree view (SP 07). This improves performance for dimensions with large numbers of members. The BV mode only loads the metadata that is asked for or required, not all of it.

 

 

BPC 10.1 EMBEDDED MODEL (BW INA provider)

 

Latest Drill-Down performance test results showed interesting improvement when activating "Refresh only Expanded and Inserted Members" option in the EPM User Options. It's recommended to use it as a default setting for BPC embedded customers.

 

 

CONNECTIONS

 

The EPM Add-in is an add-in for Microsoft Office Excel, Word, and PowerPoint, and is used to analyze data in the following applications:

 

  • SAP Business Financial Consolidation
  • SAP Business Planning and Consolidation, version for SAP NetWeaver **
  • SAP Business Planning and Consolidation, version for the Microsoft platform **
  • SAP Profitability and Cost Management
  • SAP NetWeaver BW InfoProviders (different ODBO connectors for the BPC MS and BPC NW versions)

 

** Allows write back of data via BPC Web Service connection

 

Local
This connection type is for ODBO connections (FC SSAS cubes, FC BW cubes, SSAS standard cubes, BW standard cubes, PCM ODBO provider, SSM ODBO provider, BPC MS ODBO provider, and the SAP BW OLE DB provider). An ODBO connection doesn’t allow data input and is only used for data retrieval. Data Manager features are also not reachable over an ODBO connection. However, custom members (used to build complex MDX calculated members) can only be created over an ODBO connection.

 

Planning & Consolidation
This type is for Web Service connections on top of BPC 10 MS and BPC 10 NW. A Web Service connection allows data input and the use of the Data Manager ribbon, but does not support the creation of custom members (only local members).

 

 

FUNCTIONS & FORMULAS

 

  • EPMInsertCellBeforeAfterBlock and EPMCopyRange have a big impact on writing time.
  • Use EPMDimensionOverride instead of manually changing the EPMOlapMemberO function for very large EPM reports.
  • Try to avoid mixing EPM reports, EPMRetrieveData, and EPMCommentFull/Partial, since the EPM Add-in makes a separate call to the database for each type of function (EPMOlapMemberO, EPMRetrieveData, EPMCommentFull/Partial), and those queries are not parallelized.
  • The EPMDimensionOverride and EPMAxisOverride functions are costly on loading, especially when a report is huge and has lots of formulas to evaluate.
  • Enter the connection name in all EPM formulas (EPMModelCubeID or a static text cell can be used as a reference somewhere in a hidden place of your report).
  • Avoid volatile Excel functions like ROW(), COLUMN(), and OFFSET(), as well as cascading references between cells.

 

REPORT OPTIONS

 

  • Create EPM reports instead of using the EPMRetrieveData function.
  • Parallelization is activated by default with the Axis Sharing feature, on the same data source/connection. Caution: the Axis Sharing feature is somewhat heavy in terms of performance.
  • Try not to use custom measures such as MTD or WTD (the process is not optimized).
  • EPM reports will outperform EVDRE reports. EVDRE does have some performance optimization, but EPM 10 reports have 2-3 times better performance for the same layout/behavior.
  • If different member IDs have the same descriptions, ensure “Use EPMMemberID in EPM formulas” is checked in User Options. Caution: it also has a performance impact, as the EPMMemberID formula is automatically inserted for each dimension member. A better option is to manually create a local member using the EPMMemberID function only where necessary (without checking the User Option).
  • For input, consider using Calculate Parents in Hierarchies (Sheet Options) for on-the-fly calculation on parent node members. But be aware that, even though it is a very useful feature, rendering time increases because the EPM Add-in has to insert the SUM function based on the hierarchy definition.
  • Multi-selection in the Page Axis: the SUM is performed on the client side. The Cartesian product is returned from the server and summed up on the front end.
  • The Exclude functionality changes your report from symmetric mode to asymmetric mode, and depending on the number of tuples you exclude, performance could be worse.
  • Avoid complex formulas mixing EPM and Microsoft Excel functions, such as functions that create a dependence (a formula references another formula; a formula creates a dependence between two reports) or functions that include a condition (IF).
  • By default, the EPM Add-in uses the “Insert Method” to write reports. To get around this, enable the “Keep Formulas Static that Reference Report Cells” option in the Sheet Options to clear the entire report and re-write it on each refresh.
  • For drill-down, check the “Refresh only Expanded and Inserted Members” option in the User Options instead of refreshing the entire report.
  • If you use the “Collapse” feature, there is a new tag in the FPMClient config file as of EPM Add-in SP15: “Collapsewithoutqueryingserver”. Its default value is “FALSE”, but it can be set to “TRUE” for better performance.

 

 

FORMATTING

 

  • Limit the number of formatting rules and make sure the rules don’t overlap each other.
  • Avoid using Microsoft Excel conditional formatting within an EPM Add-in dynamic formatting sheet, since every cell has to be evaluated.
  • Avoid rules that override each other, because the Add-in then formats the same cell multiple times (a pattern, for example).
  • Avoid using the CONTENT override, because the EPM Add-in then retrieves the data twice (writing + calculation).
  • Create several EPM Add-in dynamic formatting sheets instead of only one, if possible. For example, if Report 1 displays only properties A and B and Report 2 displays only properties C and D, create two different dynamic formatting sheets for better performance.

 

 

LOCAL vs CUSTOM MEMBERS

 

Local Members

  • Created for the purpose of containing a dynamic formula. An editor screen assists with the creation of the Excel-based formula
  • A local member is specific to a single report
  • Local Members can be made context-sensitive. This is in the options for Local Members

 

Custom Member

  • Custom members are created for the purpose of containing an MDX formula
  • An editor screen is presented to help create the formula
  • Custom members can be used throughout a worksheet
  • Only available for ODBO type connections

 

General note: when the data already exists on the spreadsheet, choose local members. Local members use native Excel functionality and will always perform better.

 

 

VBA

 

  • Look for an existing API before creating custom code
  • Avoid loops (Do, While...)
  • VBA best-practice “acceleration” function: settings such as screen updating, calculation, and events must be deactivated at the start of the macro and restored to their initial status on exit.
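The save/disable/restore pattern behind that last best practice is language-agnostic. Here is a minimal sketch of the idea in Python (the `settings` dict stands in for Excel application settings and is purely illustrative): capture the initial state, switch everything off, and guarantee restoration on exit with try/finally, even if the body fails.

```python
# Sketch of the "acceleration" pattern: capture settings, disable them,
# and restore the originals on exit no matter what happens in between.
# The settings dict below is an illustrative stand-in for app settings.
settings = {"screen_updating": True, "auto_calculation": True, "events": True}

def run_accelerated(task):
    saved = dict(settings)        # remember the initial status
    for key in saved:
        settings[key] = False     # switch everything off for speed
    try:
        return task()
    finally:
        settings.update(saved)    # restore on exit, even on error

result = run_accelerated(lambda: sum(range(5)))
print(result, settings["screen_updating"])  # 10 True
```

The try/finally is the important part: a macro that disables screen updating and then raises an error would otherwise leave the application in a degraded state.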

 

 

ANALYZING & DEBUGGING

 

Review the How to Debug the EPM Add-in document (https://scn.sap.com/docs/DOC-38755)

 

Determine the source of the issue (client vs. server)

  • Fiddler (3rd party software) measures the network traffic
  • EPM Add-in logs to measure the client traffic
  • ST12 (NetWeaver) – Single Transaction Trace to measure the server traffic. UJSTAT can also be used for analyzing queries on the server.

 

Not sure where an issue occurs?

The Development Team may ask for a trace from the customer. To obtain one, here is the recommended process using Fiddler:

  • In Excel, click ‘More’ ► ‘Clear Metadata Cache’ from the EPM toolbar.
  • Log off from the EPM Add-In.
  • Add the “TRACE” flag to the FPMXLClient.dll-ExcelLogConfig.xml file against the Trace logger.
  • Start the Fiddler tool running to capture web events (“Capture Events [F12]”).
  • Log into the EPM Add-In opening the necessary Connection and Model.
  • Recreate the issue or refresh the report.
  • Stop the capture, save the resulting trace log (.SAZ file), and send it to development.

 

Doing this gives Development all of the data and metadata needed to reproduce an issue without the actual data and metadata files from the customer.

 

 

USEFUL LINKS

 

Videos and solutions of EPM Add-in functionality (Comments, Formatting, Macros, Local Members, Miscellaneous Reporting, etc.)

Using different FX-rate sets (multiple R_ENTITY's) in BPC


Hi

 

I have never really seen this feature well documented. I had to use it recently in a project, so I thought others might be interested in how we resolved it. Feel free to comment and suggest modifications/simplifications.

 

Rgds Trond

Creating a csv file in other format than comma for master or transaction data Import


Hi,

 

Sometimes we encounter a situation where the description or other properties of master or transaction data contain a comma. In that case, when saving the data file as CSV, that field will split into two different columns. This document will help you create a CSV file with a separator other than a comma for master or transaction data import.

 

Steps for the same are as below:

 

  1. Open the downloaded file in Excel. Select column A and click “Text to Columns” under Data -> Text to Columns, then select the “Delimited” radio button. After this, click Next as shown in the screenshot below.

 

1.png

 

  2. Select the “Other” check box and specify the pipe “|” sign.

 

2.png

 

  3. Select “Text” in “Column data format”.

 

3.png

 

  4. Go to Control Panel -> Region and Language.

 

4.png

  5. Click “Additional Settings”.

5.png


  6. Specify the “|” symbol in “List separator”.

 

6.png

  7. After applying the above settings, save the Excel data file in .CSV format.

 

  8. After saving the file, change the List separator back to “,” (comma) in Control Panel -> Region and Language.
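If you prefer to generate the file programmatically instead of changing the Windows list separator, the same result can be sketched in Python: the standard `csv` module accepts any single-character delimiter. The file name and data rows below are illustrative only.

```python
import csv

# Illustrative master-data rows whose descriptions contain commas.
rows = [
    ["CC1000", "Sales, North Region", "US"],
    ["CC2000", "Marketing, EMEA", "DE"],
]

# Write a pipe-delimited file so the embedded commas do not split columns.
with open("costcenter.csv", "w", newline="") as f:
    csv.writer(f, delimiter="|").writerows(rows)

print(open("costcenter.csv").read())
```

The transformation file used by the Data Manager import would then need its delimiter set to match.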

 

 

Best Regards,

Deepak

Using VBA with BPC 10 NW



Hi

 

Since our bread and butter for BPC is Excel-based, it seems safe to say that we should take advantage of the inherent Excel functionality to improve BPC performance and user interaction.

 

I am writing this document to explain some of the basic VBA code that can be used to assist us while building reports and forms.

 

 

To start off, so that we are all on the same page: to open the VBA code window you have two ways (at least that I am aware of). The first is via the Excel menu > Developer tab > View Code; the other (my favorite) is the hot-key ALT+F11.

 

To add the developer tab to your excel menu, please follow the below steps.

 

 

Step 1:
Right click on the menu and click on "Customize the ribbon".

 

Adding Developer Tab.jpg

 

Step 2:

 

Check the box next to "Developer" and click on "Ok".

Adding Developer Tab 1.jpg

 

 

You can do a lot more in the Developer tab than just opening the "View Code" window, but I am not going into that in this document.

 

 

 

Now, in the VBA code window, you will notice that there are several windows/panes open. The two panes that we are interested in are (1) the Project Explorer and (2) the code window. If you do not see these panes, you can add them via "View > Project Explorer" (shortcut CTRL+R) and "View > Code" (shortcut F7).

 

VBA Window.jpg

 

The left window is the Project Explorer and the right window is the code window. The left drop-down in the code window is used to select the object, and the right drop-down is used to select the event/trigger for that object.

 

For our purposes, we are going to add a Module to our workbook. You can do that by right-clicking on "VBAProject (Book1)" and navigating to Insert > Module.

 

VBA Window 1.jpg

 

The various functions that I have used over the past years are listed below with their explanations.

 

Public Function AFTER_WORKBOOK_OPEN()

End Function

 

This is the first code that will be called after the workbook has been opened. You can insert any code that needs to be executed just after the workbook opens. Please note that if you have set "Refresh Data in the whole file when opening it" in the Sheet Options, that refresh happens before this code is executed.

 

Public Function BEFORE_SAVE() As Boolean

    ' blnMySave is set by the MySave macro shown later in this document
    If blnMySave Then
        BEFORE_SAVE = True
    Else
        MsgBox "Please use button to save"
        BEFORE_SAVE = False
    End If

End Function


This code is called every time the EPM Add-in attempts to send data back to BPC for saving. You can insert code here if you need to perform any validation before the data is sent, or to stop the user from clicking "Save Data" in the EPM tab, as in the example. The functions I use to call save and refresh are shown at the bottom of this document.


Public Function BEFORE_REFRESH() As Boolean

    ' blnMyRef is set by the MyRefresh macro shown later in this document
    If blnMyRef Then
        BEFORE_REFRESH = True
    Else
        MsgBox "Please use button"
        BEFORE_REFRESH = False
    End If

End Function


This code is called every time the EPM Add-in attempts to refresh the report or form. You can insert code here if you need to perform any action before the EPM Add-in refreshes the sheet, or to stop the user from clicking "Refresh" in the EPM tab, as in the example. The functions I use to call save and refresh are shown at the bottom of this document.

 

Public Function AFTER_REFRESH() As Boolean

    AFTER_REFRESH = True

End Function

 

This code is called after the refresh has occurred. You can insert the code to perform any action or call to another function within the function definition.

 

As promised, the two functions that I have created to save and refresh my workbook are as follows

 

Public Sub MySave()

    blnMySave = True
    EPM.SaveAndRefreshWorksheetData
    blnMySave = False
    'MsgBox "Saved"

End Sub


Public Sub MyRefresh()

    blnMyRef = True
    EPM.Refresh
    blnMyRef = False
    'MsgBox "Refreshed"

End Sub

 

 

Please note that I have defined the EPM object at the top of the module; the blnMySave and blnMyRef flags should be declared there as well.

 

Dim EPM As New FPMXLClient.EPMAddInAutomation
Dim blnMySave As Boolean
Dim blnMyRef As Boolean


I hope this document was helpful.


Thanks,


Ranjeet

Automated DM Execution Based on Dynamic Selections

Introduction

One of the biggest shortcomings when it comes to loading transaction data into BPC is the lack of variable-based selection. This document attempts to address that by automating Data Manager execution using values determined at run time. There are several discussions around this topic, but I thought I would share a working solution that could be helpful for others as well.

 

Scope

The scope of this document is restricted to executing a Data Manager package based on a simple business scenario.

 

Software Components

- SAP BPC 10.0 on SAP NetWeaver BW (7.31)

- EPM Add-in Version 10 SP21 (.NET 4)

 

Business Case

You are required to automate the transaction data load from BW to BPC. The planning cycle is monthly, and the Data Manager package must execute for the current period every month without any manual intervention.

 

Approach

There are various ways to execute a Data Manager Package. Some of them are:

 

However, none of these approaches enables variable-based execution of a DMP. We are considering a case where data must be loaded every month for the current period without manual intervention. In BW data loads, we can select variables or write selection routines in DTPs/InfoPackages to do that. BPC does not provide variable-based selections for data loads out of the box, and it also lacks selection options such as "not equal to" or wildcards.

 

What we will do is write a program with a selection screen; the program itself will generate the answer prompt, which can then be passed, along with the other required values, to the program UJD_TEST_PACKAGE.

 

Here is how the selection screen looks:

Selection Screen.jpg

For the purpose of explanation, Fiscal Year/Period has been included in the selection screen; it can be used behind the scenes as well. To make the program a little more flexible, an option for transformation file selection is also added. This helps in testing different transformation files if required.

 

We will be loading data for the current period into our planning model. Here is what the program does:

The following declaration is required to generate the answer prompt. The class cl_abap_char_utilities provides the horizontal tab character, which we assign to the constant c_tab below.

CLASS cl_abap_char_utilities DEFINITION LOAD.

CONSTANTS: c_tab TYPE c VALUE cl_abap_char_utilities=>horizontal_tab.

Selection screen:

SELECTION-SCREEN BEGIN OF BLOCK blck1.

PARAMETERS: bpc_env TYPE uja_appl-appset_id OBLIGATORY LOWER CASE,

             bpc_mod TYPE uja_appl-application_id OBLIGATORY LOWER CASE,

             bw_iprov TYPE rsinfoprov OBLIGATORY LOWER CASE,

             bpc_pak TYPE uj_package_id OBLIGATORY LOWER CASE,

             bpc_trfl TYPE uj_string OBLIGATORY LOWER CASE,

             bpc_tid TYPE /bi0/oifiscper OBLIGATORY."Selection field

SELECTION-SCREEN END OF BLOCK blck1.

We initialize bpc_tid, the fiscal period variable, with the current period.

INITIALIZATION.

CONCATENATE sy-datum+0(4) '0' sy-datum+4(2) INTO bpc_tid.
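The CONCATENATE above builds the fiscal period in the form YYYY0MM from the system date (sy-datum is YYYYMMDD). The same derivation can be sketched in Python; the function name is illustrative:

```python
from datetime import date

def current_fiscal_period(today: date) -> str:
    """Build a BW-style fiscal period (YYYY0MM) from a date, mirroring
    the ABAP CONCATENATE of sy-datum+0(4), '0', sy-datum+4(2)."""
    return f"{today.year:04d}0{today.month:02d}"

print(current_fiscal_period(date(2024, 12, 5)))  # 2024012
```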

Generating answer prompt:

The answer prompt has to look like the one below:

DM Log.jpg

We will define string variables for each of these lines and later concatenate them into a single answer prompt.

It would be better to ensure that only the required fields are selected in the DMP, so that the string is easier to generate.

 

Note: the low value for 0FISCPER is bpc_tid.

CONCATENATE '%InforProvide%' c_tab bw_iprov cl_abap_char_utilities=>cr_lf INTO ip_sel.


CONCATENATE '%SELECTION%' c_tab '<?xml version="1.0" encoding="utf-16"?><Selections xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">' INTO bpc_sel.

CONCATENATE bpc_sel '<Selection Type="Selection"><Attribute><ID>0FISCPER</ID><Operator>1</Operator><LowValue>' bpc_tid INTO sel_fld.

CONCATENATE sel_fld '</LowValue><HighValue /></Attribute><Attribute><ID>0VERSION</ID><Operator>1</Operator><LowValue>001</LowValue><HighValue /></Attribute></Selection>' INTO sel_fld.

CONCATENATE sel_fld '<Selection Type="FieldList"><FieldID>0ACCT_TYPE</FieldID><FieldID>0CHRT_ACCTS</FieldID><FieldID>0COMP_CODE</FieldID><FieldID>0COSTCENTER</FieldID>' INTO sel_fld.

CONCATENATE sel_fld '<FieldID>0CO_AREA</FieldID><FieldID>0DOC_CURRCY</FieldID><FieldID>0FISCPER</FieldID><FieldID>0FISCVARNT</FieldID><FieldID>0GL_ACCOUNT</FieldID>' INTO sel_fld.

CONCATENATE sel_fld '<FieldID>0PCOMPANY</FieldID><FieldID>0PROFIT_CTR</FieldID><FieldID>0VERSION</FieldID></Selection></Selections>' cl_abap_char_utilities=>cr_lf  INTO sel_fld.

CONCATENATE '%TRANSFORMATION%' c_tab '\ROOT\WEBFOLDERS\ENVIRONMENT\MODEL\DATAMANAGER\TRANSFORMATIONFILES\' bpc_trfl '.XLS' cl_abap_char_utilities=>cr_lf INTO sel_trfl.

CONCATENATE '%TARGETMODE%' c_tab '1' cl_abap_char_utilities=>cr_lf INTO sel_tm.

CONCATENATE '%RUNLOGIC%' c_tab '0' cl_abap_char_utilities=>cr_lf INTO sel_rl.

CONCATENATE '%CHECKLCK%' c_tab '0' INTO sel_cl.

 

CONCATENATE ip_sel sel_fld sel_trfl sel_tm sel_rl sel_cl INTO aprompt.

Notice that each variable is separated by c_tab, as declared earlier. Also, at the end of each line of the answer prompt we append cl_abap_char_utilities=>cr_lf; this is required because that is how the SAP program separates lines.
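The layout being assembled above can be sketched generically: each prompt line is a token and a value separated by a horizontal tab, and the lines are joined with CR+LF. A minimal Python sketch (tokens taken from the ABAP above, values illustrative):

```python
# Sketch of the DM answer-prompt layout: TOKEN<TAB>VALUE lines joined
# by CR+LF, matching the c_tab / cr_lf separators used in the ABAP code.
TAB, CRLF = "\t", "\r\n"

def build_prompt(pairs):
    """Join (token, value) pairs into a single answer-prompt string."""
    return CRLF.join(f"{token}{TAB}{value}" for token, value in pairs)

prompt = build_prompt([
    ("%TARGETMODE%", "1"),
    ("%RUNLOGIC%", "0"),
    ("%CHECKLCK%", "0"),
])
print(repr(prompt))
```

This makes it easy to see why a missing tab or line break (as in the spacing bugs that creep into long CONCATENATE chains) produces a prompt the package cannot parse.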

 

Now that the answer prompt is ready, we have to feed it to the selection screen of UJD_TEST_PACKAGE.

SUBMIT ujd_test_package
  WITH appl     = bpc_mod
  WITH appset   = bpc_env
  WITH group    = bpc_group
  WITH if_file  = uj00_cs_bool-no
  WITH if_msg   = uj00_cs_bool-no
  WITH if_sync  = uj00_cs_bool-yes
  WITH package  = bpc_pak
  WITH prompt   = aprompt
  WITH schedule = '<SCHEDULING><IMMEDIATE>Y</IMMEDIATE><STATE>0</STATE><PERIOD>N</PERIOD></SCHEDULING>'
  WITH team     = bpc_team
  WITH user     = sy-uname
  EXPORTING LIST TO MEMORY AND RETURN.

You can incorporate more flexibility and options based on your ABAP expertise. When you execute this program, it triggers the DM package, and all the selections are taken from the selection screen.

 

DMP Execution Log.jpg

I have attached the sample code. You will need to modify it based on your source InfoCube/DSO, selection parameters, and anything else specific to your case.

 

Now you can schedule your DMP to load the current period without any manual intervention.

Property values upload to Time Dimension


Hi Friends,

 

I want to maintain Geography property values in the CostCentre dimension. I have a big list of cost centres, so it is difficult to maintain them one by one. Is there an option to upload the property values based on Cost Centre ID?

 

Thank you,

Kumar

Data Flow in BPC Embedded 10.1 - White Paper


SAP BPC EMBEDDED 10.1: Data flow within the embedded system and its influence from characteristic relationships

Understand the basic behavior of the data when it is drawn into a report, when it is transferred between cubes, and when it is entered using input schedules.

Key Concept:

In BPC 10.1 Embedded, the data is stored in BW InfoProviders and is fed directly into BPC reports with the help of BEx queries, unlike other versions of BPC where the data is stored in a separate namespace provided for BPC. Accordingly, BPC Embedded 10.1 opens up the scope to use Business Explorer along with characteristic derivation. The new concept of characteristic derivation combined with BEx and BPC reporting has its own behavior, and this document attempts to explain its impact on data flows.

We use characteristic relationships to link characteristics that have similar content. You can use characteristic relationships to define rules that check permitted combinations of characteristic values for real-time InfoCubes. You can also define rules the system uses to derive the values of one characteristic from another. This is useful, for example, if you want the derivable characteristics to be available for further analysis. To illustrate, I have taken scenarios from a real-life project implementation to show how the data looks in the source and how it comes out on the other side.

We can analyse the characteristic relationship from the following viewpoints:

  1. When data is sent from an input schedule to a cube
  2. When data is drawn into reports from a cube
  3. When data is transferred between cubes

 

Data sent from Input schedule to Cube:

 

When data is sent from a query to the cube, and the query is built on an aggregation that has only a few InfoObjects, the data in the other InfoObjects will be derived if a characteristic relationship is established between them.

 

For example: below, some data was submitted through an input schedule; observe that there is no FERC Account, PCAT, Company Code, or Refund Code in the aggregation.

DF 1.JPG

DF 2.JPG

 

 

Below, in RSPLAN, we have established a relationship between order number and company code, plan category, refund code, and FERC account.

 

DF 3.JPG

 

 

We can see from the data below that the FERC Account, Company Code, PCAT, and Refund Code values are derived automatically.

 

DF 4.JPG

 

 

On the other hand, if all the InfoObjects are in the aggregation and a characteristic relationship is established, the data is validated when it is sent to the cube, and only valid combinations are written. In the screenshot below, an invalid combination is rejected and an error is thrown.

 

DF 5.JPG
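The two behaviors described in this section, derivation when a characteristic is missing from the aggregation and validation when it is present, can be sketched as a toy model. The mapping from order number to company code below is entirely hypothetical:

```python
# Toy sketch of characteristic-relationship behavior (hypothetical data):
# derive a missing characteristic from a mapping, or validate a supplied one.
ORDER_TO_COMPANY = {"300482502": "1000", "300482503": "2000"}  # illustrative

def derive_or_validate(record: dict) -> dict:
    expected = ORDER_TO_COMPANY.get(record["order"])
    if "company" not in record:
        # Derivation: fill the characteristic from the relationship
        return {**record, "company": expected}
    if record["company"] != expected:
        # Validation: reject invalid combinations, as the system does on save
        raise ValueError("invalid characteristic combination")
    return record

print(derive_or_validate({"order": "300482502"}))  # company derived
```

A real characteristic relationship is of course maintained in RSPLAN against master data or an exit class; the sketch only shows the derive-or-reject logic.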

 

 

 

 

Data drawn into the reports from Cube

 

When an InfoObject is not in the aggregation, the data in the cube is summed up when pulled into a report, as in the following example:

 

Data in the cube: below we can see different internal orders (000300482502, 000300482503) at line-item level.

 

 

DF 6.JPG

 

 

 

Below, note that the aggregation does not include the order number.

 

DF 7.JPG

 

 

Data in the report: observe that all the data is pulled into the report, and the query on the above aggregation sums up the internal orders.


DF 8.JPG
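The summing behavior can be sketched in a few lines: amounts are totalled over every characteristic that is absent from the aggregation, here the internal order. The line items below are illustrative:

```python
from collections import defaultdict

# Toy line items keyed by (account, internal_order); the report's
# aggregation omits internal_order, so amounts sum up per account.
line_items = [
    ("400100", "300482502", 150.0),
    ("400100", "300482503", 250.0),
    ("400200", "300482502", 100.0),
]

totals = defaultdict(float)
for account, _order, amount in line_items:  # order is dropped from the key
    totals[account] += amount

print(dict(totals))  # {'400100': 400.0, '400200': 100.0}
```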

 

 

Data transferred between cubes:

 

To transfer data between cubes, we need to build an aggregation on a MultiProvider that contains the respective cubes. On this aggregation we build a planning function with the respective cubes as the 'from' and 'to'.

 

DF 9.JPG

 

 

 

  1. When transferring data from one cube to another on a specific aggregation, values are derived for any InfoObject that is not present in the aggregation. If they should not be derived, we need to deactivate the characteristic relationship in the receiving cube.

 

For Example : We have the following

 

Sending Cube     Aggregation      Receiving Cube
Account          Account          Account
Category         Category         Category
Internal order   Internal order   Internal order
Budget code                       Budget code

 

 

If we have a relationship between internal order and budget code, the budget code data will be derived in the receiving cube.

 

If there is no relationship, the budget code data will not be transferred.

 

 

 

  2. The data is automatically validated in the receiving cube, and only valid data is imported; data that fails validation is rejected.

 

 

  3. When transferring data from one cube to another where the data should be neither derived nor validated, we need to deactivate the characteristic relationship using a program, as below. Deactivation and activation need to be executed before and after the copy function, respectively.

 

 

Logic for the program is as below:

 

Function to deactivate a characteristic relationship in a cube.

 

 

DF 10.JPG

 

 

DF 11.JPG

Function to activate characteristic relationship in a cube.

 

DF 12.JPG

 

 

DF 13.JPG

 

4. Data will not be transferred to the cube if the sending cube has an InfoObject that does not exist in the receiving cube. If such data has to be sent, the aggregation should not include that InfoObject.

 

 

5. When data is copied from one cube to another, the existing data in the receiving cube is overwritten.

 

Authors

GNANEESH REDDY

Gnaneesh Reddy is a Senior Business Consultant at PwC SDC – India. He is a qualified chartered accountant, certified in BPC 10, and holds a diploma in IFRS, with experience in diverse roles in accounting and SAP BPC. He also has hands-on experience in auditing and accounting, including auditing, finalization of accounts, investigations, and income tax consulting. He has an overall experience of 6 years in the IT industry spanning design, development, testing, and support of system/business applications on client/server and multi-tiered architecture models using SAP BPC technology.

       

SATYA TEJA MOVVA

Satya Teja Movva, a Senior Software Engineer with PricewaterhouseCoopers SDC – India, is an SAP professional in BI. He holds a Bachelor's degree in Engineering and has five years and five months of experience in SAP Business Intelligence and Warehousing. He has worked on different versions of SAP BO (XI R3, 4.0, 4.1). As an SAP BO consultant he has executed five projects, including end-to-end implementation, enhancement, and support projects. He has good experience in ETL and expertise in BO Web Intelligence and SAP Lumira, and has implemented reporting on SAP BEx queries, SAP HANA calculation views, and other relational sources.

 

VIPIN JAIN

Vipin Jain, a Senior Business Consultant at PwC SDC – India. He is a qualified chartered accountant with a Diploma in Information Systems Audit and certifications in SAP HANA 100, BPC 10.0 and PMP. He has more than 13 years of experience in Global Consolidation, Planning, Business Analysis and Project Execution. He has comprehensive knowledge of key business processes underlying complex planning scenarios, IAS, IFRS and US GAAP. He has an overall experience of 9 years in the IT industry spanning design, configuration, development, enhancements, rollouts, testing and support of business applications on multi-tiered architecture models using SAP technology.


Locking Functionality in BPC Embedded 10.1


SAP BPC EMBEDDED 10.1: LOCK Functionality

Learn how the locking function is configured and what determines the optimum combination. Understand the basic types of locking and the locking that is used in Embedded 10.1 version.

Key Concept:

Both BPC and IP use transaction RSPLSE for their locking mechanism.

In BPC Standard, the locking function works on a stateless server, while BPC Embedded works on a stateful server. On a stateless server, only the records changed on save to the DB are locked, and only for the duration of the save; any data saved last overwrites earlier saves. On a stateful server, the locked data region is based on the filter and the BW query definition. In this case the data in that region can only be changed by one user at a time.

Locking Issue and the related T Codes:

The locking function can be viewed in transaction RSPLSE. Usually, when we do not use filters in the query, the whole data set in the cube is set as the specified data region and is locked. Thus the second user gets the message below when trying to open any input schedule built on the respective cube.

 

LF1.JPG

If we need to delete a lock, we can do so from transaction SM12. In SAP lock management (transaction SM12), table RSPLS_S_LOCK displays the compressed lock records. To find the actual locked selections, you need to call transaction RSPLSE.

In order to avoid unnecessary locking, we need to make the following settings in transaction RSPLSE.

RSPLSE has got five tabs as follows

  • Lock Table: Here you specify where the lock table is stored
  • Lock Characteristics: Here you specify which characteristics are relevant for lock checks
  • Lock: Here, you display locks for a specific InfoProvider and user
  • Lock Conflict: The system displays information about the last lock conflict
  • Master Locks: You can display and delete master locks here

 

Locking Conflict 

Assume user U1 has acquired an exclusive lock for data region F1 (filter). A user U2 working on an overlapping data region F2 protected by an exclusive lock (data in change mode) will get a lock conflict. So the first user U1 acquires the resource and can change/create data in the data region defined by the filter F1. The second user U2 cannot change data for any filter F overlapping with filter F1: a query will be switched to display mode, and a planning function will send an error message.

The above statement can be well understood with the below example.

EXAMPLE 1: We have created two queries

  1. ZALCAP004_TESTLOCKING2 – where company code 2100 is hard-coded in the filter
  2. ZALCAP004_TESTLOCKING – where company code 2200 is hard-coded in the filter

 

LF2.JPG

 

LF3.JPG

When different users execute the same query and the lock characteristics are not configured, the screen locks; checking the Lock Conflict tab, we can see the following.

 

LF4.JPG

Now we will set the lock characteristics. We will set company code as lock relevant as below

 

LF5.JPG

Now we can see that when the reports are executed, no locking conflict occurs, and we can also observe that the relevant locks are placed for the respective users.

 

LF6.JPG

In continuation of the above, we have selected four more characteristics as lock relevant (fiscal year, version, cost center and cost element). If a number of users are performing planning for the same year and the same cost elements, the Cost Center and Version characteristics are sufficient to ensure that different users' selections do not overlap.
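Conceptually, two lock filters conflict only when their selections overlap in every lock-relevant characteristic. A toy sketch of that check (the dimension and member values are made up, loosely following the company-code example above):

```python
# Toy model of the exclusive-lock overlap check: a filter maps each
# lock-relevant characteristic to the set of selected members; two filters
# conflict only if the selections intersect in every characteristic.
def overlaps(f1, f2):
    return all(f1[dim] & f2[dim] for dim in f1)

u1 = {"COMPCODE": {"2100"}, "VERSION": {"PLAN"}}           # user 1's data region
u2 = {"COMPCODE": {"2200"}, "VERSION": {"PLAN"}}           # user 2, other company code
u3 = {"COMPCODE": {"2100", "2200"}, "VERSION": {"PLAN"}}   # broader selection

print(overlaps(u1, u2))  # False - disjoint company codes, no lock conflict
print(overlaps(u1, u3))  # True  - regions overlap, second user gets a conflict
```

This is why restricting each user's query filter to disjoint company codes, as in the example above, avoids the conflict.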

Navigation attributes as lock characteristics

In expert mode, you can also maintain navigation attributes as lock characteristics. However, navigation attributes are not relevant for locks in the default setting. They are always locked completely. The reason for this is that attribute values may change in a planning session. Otherwise, different users could edit the same objects.

Example:
User 1 plans product P1, which is in product group (navigation attribute) PG1. PG1 is selected in the selection table and there is no restriction on the product. If the lock is based on the product group, the following may happen: user 2 plans product group PG2. First, user 1 accesses planning and the data is locked. User 2 changes the attribute of product P1 from PG1 to PG2, saves the data, and accesses planning. Since PG2 and PG1 are now formally disjunct, no lock conflict occurs. Both users can edit the same data.

Therefore, if you want to use navigation attributes as lock characteristics, you must ensure from an organizational point of view that these cannot be changed during planning. This applies to manual changes or changes that are caused by Warehouse Management processes.

In conclusion, the size of the lock table on the lock server is restricted, and it is recommended to keep the lock table small to improve the response time of the lock server. As the size of the lock table depends on the lock-relevant characteristics, it is recommended to keep the lock-relevant characteristics to a minimum.

 

Related Notes:

Above are a few important points to keep in mind. For further information, please refer to SAP Note 928044.


VBA created named ranges to make lookups from your data sheet easier


Performance is the most critical factor for our users when they refresh reports. We're on SAP BPC 10.1 for NW on HANA, but we still rely on streamlining the client-side process to improve the user experience. We've found the best way to accomplish this is to consolidate information into a single report and leverage native Excel functionality to look up data from that report, instead of refreshing multiple reports across multiple tabs. Our biggest example of this is a packet of over 20 reports, each on its own Excel sheet, all fed by a single EPM report that refreshes in about 15 seconds.

 

While this has proven to be the optimal design for our real-time, on-demand reporting packets, accessing the report data from the data tab can be troublesome, specifically if the report is dynamic and members change depending on the selected context. This is the problem I sought to standardize, to make data tabs easier to utilize.

 

Incorporate the VBA code below in any Excel workbook and it will automatically create Excel named ranges for the row, column, and data sections of the report that can be leveraged to access the data contained in the EPM reports. Each named range correlates to the report ID; for example, Report001 will have a RowRng001, a ColRng001, and a DataRng001.

 

AR_CREATE_RPTRNG_VBA_MODULE.JPG

 

Take the example report below. The ranges created are colored to show the result of the VBA code.

REPORT_EXAMPLE.JPG

 

This data is now accessible using the following index match formula shown below, and when locked on the reference rows and columns, the formula can be copied and pasted to provide the data for the report.

 

INDEX_MATCH EXAMPLE.JPG
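In pseudo-spreadsheet terms, the lookup the formula performs can be sketched as follows (the member labels and values here are hypothetical, not taken from the report shown):

```python
# Toy model of =INDEX(DataRng001, MATCH(row, RowRng001, 0),
#                     MATCH(col, ColRng001, 0)) against the generated ranges.
row_rng = ["Revenue", "COGS", "Margin"]      # RowRng001 labels (hypothetical)
col_rng = ["2015.JAN", "2015.FEB"]           # ColRng001 labels (hypothetical)
data_rng = [[100, 110],                      # DataRng001 values (hypothetical)
            [60, 65],
            [40, 45]]

def lookup(row_label, col_label):
    # MATCH finds the position of the label; INDEX reads that intersection.
    return data_rng[row_rng.index(row_label)][col_rng.index(col_label)]

print(lookup("COGS", "2015.FEB"))  # 65
```

Locking the range references in the Excel formula lets the same expression be copied across the whole report sheet.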

 

In a more complex report, multiple rows or columns are used to create the right intersection of data; even in those circumstances this approach can be leveraged. In the example below I've used a local member to concatenate the members, creating a lookup key within the data section of the report.

 

REPORT_EXAMPLE2.JPG

 

The data can be accessed by matching on the first column of the data range as shown by the formula below. (note: I forgot to include the iferror wrapper on this version of the excel function)

 

INDEX_MATCH EXAMPLE2.JPG

 

Feel free to leverage this code as it has made building reports much simpler.

 

Anyone diving into the code might also see the Include functionality. Here is a brief overview: complex formulas that are not stored and need to be calculated on the fly are better done client side. (We dream of the day when HANA MDX can be incorporated into our models without causing a performance hit.) Similar to the EPM Copy Range function, sometimes we want our reports to include extra formulas across the entire report, and the Include functionality built into this VBA will do that. Formulas found in a named range such as Include001 will be incorporated into the report, and names are built to make it easier to access these calculations. Note: names are expected to be given just above the include range. See the picture below:


INCLUDE000_EXAMPLE2.JPG

How to resolve MDX Statement error. Error occurred when starting the parser.Error when opening an RFC connection (Function).


It is possible that the RFC fails due to the OS reaching the limit on the maximum number of processes.


First, check the ulimit values on the OS. For this you can refer to KBA 2048826.

 

To do so, type "ulimit -a" at the command prompt; values like the following should appear on your system.

 

ulimit -a

time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
threads(per process) unlimited
processes(per user) unlimited
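If shell access is awkward, the same soft/hard limits can also be read programmatically. A minimal sketch using Python's standard resource module (POSIX only; RLIMIT_NPROC is the per-user process limit and may not be available on every platform):

```python
import resource

# Read the soft/hard limits for processes per user; RLIM_INFINITY
# corresponds to the "unlimited" entries in the ulimit -a output above.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

def fmt(value):
    return "unlimited" if value == resource.RLIM_INFINITY else str(value)

print("processes(per user)", fmt(soft), fmt(hard))
```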

 

Note: On a Windows server the above check can resolve the issue, but on an AIX system you need to follow the steps below.

 

 

On AIX there is a second limit for processes, which is set to 128 on your server:

lsattr -El sys0 -a maxuproc

maxuproc 128 Maximum number of PROCESSES allowed per user True

 

This maxuproc should be increased to at least 1024. See Note 1972803, SAP on AIX:


Recommendations, Section 3, for details:

 

"o Maximum number of processes for each user in the system:

 

This value (maxuproc) can be checked using the command 'lsattr El

sys0 a

maxuproc' and is often set too low at 128. You should use the

command 'chdev l

sys0 a

maxuproc=1024' to set the value to at least

1024."

 

Please increase the maxuproc to resolve this issue.

 

 

 

 

 

Regards,

 

Aravind

How to resolve if Not able to Edit Members of a Dimension after PROCESSING it.


Please follow the steps below to create a local program in the backend in order to clean up the dirty data.

 

Step 1. Open t-code SE80 and select 'Local Objects' in the 'Repository Browser' tab.

 

Step 2. Right-click on the root folder and select 'Create' then 'Program', input a program name such as 'ZCLEAN_DIRTY_DATA', leave the 'Create with TOP Include' check box unchecked, then click OK.

 

Step 3. Copy the following line into the program:

delete from rsbpc_web_up where user_id = '{USER_ID}' and category = '{CATEGORY}'.

 

Step 4. Replace the placeholder {USER_ID} with the user name that met the missing column issue, for instance, USERA.

 

Step 5. Find the correct value for the placeholder {CATEGORY} following steps 5.1 - 5.4 below.

 

Step 5.1 Open t-code SE16 in a new session and open table RSBPC_WEB_UP.

 

Step 5.2 Set USER_ID to the user that met the missing column issue, for instance, USERA. Set NAME to the value colSeq. The value for NAME is case-sensitive. Then execute or press F8.

 

Step 5.3 In the column 'CATEGORY', find the correct value for {CATEGORY}. The format of CATEGORY value should be 'members_{ENVIRONMENT_NAME}_{DIMENSION_NAME}'. For instance, if the missing column issue happens in the dimension 'ENTITY' of the environment 'ENVIRONMENT_TEST', the value of CATEGORY should be 'members_ENVIRONMENT_TEST_ENTITY'. If you find this value in the column 'CATEGORY', please double click on it to open it. Please note that if there is no such value for the dimension that has the error, this note may not work.

 

Step 5.4 Copy the value of CATEGORY, for instance, 'members_ENVIRONMENT_TEST_ENTITY', and replace the placeholder '{CATEGORY}' in the local program. After doing this, the program should look like the following:

delete from rsbpc_web_up where user_id = 'USERA' and category = 'members_ENVIRONMENT_TEST_ENTITY'.
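The CATEGORY value in steps 5.3 - 5.4 always follows the same pattern, so a tiny helper can build it (a sketch; the environment and dimension names are the ones from the example above):

```python
# Build the RSBPC_WEB_UP category key 'members_{ENVIRONMENT}_{DIMENSION}'.
def category(environment, dimension):
    return "members_{}_{}".format(environment, dimension)

print(category("ENVIRONMENT_TEST", "ENTITY"))  # members_ENVIRONMENT_TEST_ENTITY
```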

 

Step 6. Activate the program and execute it, then log on to the web client again to verify whether the missing column appears.

 

After implementing the above steps you will be able to edit the members of the dimension that were previously locked.

 

 

Note: If the above steps do not resolve the issue, please go through SAP Note 2216722.

Also check the BPC and BW support package levels as per the SAP Note.



Regards,


Aravind

Housekeeping Jobs_SAP BPC NW


Performance is a key factor in SAP BPC, and a lot of tables keep growing as time passes.

I wanted to document all housekeeping jobs in one place.
Wanted to document all house keeping jobs at one place.

1.BPC Statistics.

Always switch off parameter BPC_STATISTICS in SPRO settings after BPC performance statistics trace.

Execute program UJ0_STATISTICS_DELETE via transaction code SA38/SE38 to delete obsolete statistics, or schedule it as a background job as per Note 1648137.

 

2.UJBR Backup and Restore:

As a best practice, we should take a backup with the UJBR transaction once a week.

We can schedule this job weekly, or, if you have more than one environment and want to run the backup jobs together, you can create a process chain with the program UJT_BACKUP_RESTORE_UI.

By selecting the "Execute Backup" radio button we can take a full backup of the environment. In the event of data loss for any reason, we can restore the environment by selecting the "Execute Restore" button.

 

3.Exports Jobs:

With UJBR we take a full backup weekly, but usually we work mostly on one category (Plan/Forecast). Most of the time we may need to restore data of a particular category, for a month or two, for some selections. However, UJBR restore does not offer an option to restore a particular set of data.

If we take exports and save them on the server, we can import based on our selections.

With "Export Transaction Data to File" DMP we can export the data.

We can import the data with the below 2 DMPs per our requirement.

a.Import Transaction Data Aggregate Overwrite Mode.

b.Import Transaction Data (Last Overwrite Mode).

 

4.LO Job:

The Lite Optimization process helps move transaction data from the F fact table to the E fact table, apart from other activities (i.e., rollup, statistics update, closing the open requests).

This should be scheduled every night during off-business hours. It improves query performance.

We can switch on Zero elimination in the /CPMB/LIGHT_OPTIMIZE process chain.

Else we can check “with Zero elimination” check box in Cube Manage tab in BW system.

The first option is applicable for all the models in a system, but the second option is for the specific cube (Model).

 

5.Zero Elimination:

If zero elimination is not switched on for any reason and you want to eliminate zero records from the system, you may use the "RSCDS_NULLELIM" program.

 

6.Audit tables House Keeping:

In most of the cases we have audit logs enabled for Administration activity and User Activity.

These tables are not purged automatically, even though a purge frequency has been set.

We need to schedule the DMP BPC: Archive Data (/CPMB/ARCHIVE_DATA) regularly. Based on the purge frequency given in the audit functionality, audit data moves from the audit data tables to the archive tables.

From the archive tables we can delete data with the "UJU_DELETE_AUDIT_DATA" program.

For Administration activity logs we need to use “BPC: Archive Activity” ( /CPMB/ARCHIVE_ACTIVITY) DMP.

This DMP moves data from UJU_AUDACTDET to UJU_AUDACTDET_A, and from UJU_AUDACTHDR to UJU_AUDACTHDR_A, based on the selection given in the DMP. We can delete the data from UJU_AUDACTDET_A and UJU_AUDACTHDR_A with the help of the SE14 functionality.

 

7. Comments and Journals House Keeping:

If comments are enabled and you are using journal entries, you may use the BPC: Clear Comments (/CPMB/CLEARCOMMENTS) and BPC: Clear Journal Tables (/CPMB/CLEAR_JOURNALS) DMPs.

 

8.BALDAT,BALHDR,BAL_INDX:

With the help of the SBAL_DELETE program or transaction SLG2 we can delete application logs that are older than one year, or as per our requirement.

 

9.UJF_DOC, UJF_DOC_CLUSTER,UJD_STATUS tables:

The UJF_DOC table contains transformation files, conversion files, script logic files and other documents, as well as the flat files generated by export jobs, the files uploaded for copy jobs, and the logs generated by DMP executions.

We can delete the unwanted files, reports from Data Manager/EPM tabs in Excel.

The UJF_DOC_CLUSTER and UJD_STATUS tables contain DMP execution log details. The programs UJF_FILE_SERVICE_DLT_DM_FILES and UJF_FILE_SERVICE_CLEAN_LOGS can be used to delete the data from these tables.

We can also delete the entries manually in the UJFS transaction.

 

10.Work Status Tables:

Sometimes we will have entries in the system for obsolete transaction data (e.g., you locked data for the year 2010 and later deleted the transaction data, but the work status table still contains entries for 2010).

Implement SAP Note 2053697 and run the UJW_WS_TEST program. If you don't provide a selection here, all work status entries will be deleted.


References:

1. 1470209 - BW report RSCDS_NULLELIM on InfoCube without time dimension

2. 1934038 - Housekeeping of table UJ0_STAT_DTL

3. 1705431 - Planning and Consolidation 10.0 NW - Housekeeping

4. 195157 - Application log: Deletion of logs

5. 1908533 - BPC File Service Cleanup Tool

6. 2053697 - ABAP report to remove obsolete work status for data region

7. http://scn.sap.com/thread/3887031

How To Use the BPC Mass User Management Tool SAP Business Planning and Consolidation 10.1 NW Standard Version


This guide will introduce BPC User Management Tool for SAP Business Planning and Consolidation, 10.1 NetWeaver platform (Standard version) and all its associated functions and features.

 

View Document

How-To: Write default.lgf


In this article I decided to accumulate some knowledge regarding default.lgf scripts.

 

Purpose of default.lgf:

 

To perform calculations triggered by user data sent via some input schedule or journal. It can also be launched (if the user selects the option) at the end of some standard DM chains (Copy, Move, Import, etc.).

 

For DM chains like DEFAULT_FORMULAS that are used to run scripts, the default.lgf is NOT triggered.

 

Scope of default.lgf

 

When launched, default.lgf receives as its scope the combination of all members of all dimensions of the data sent by the user and actually saved to the cube. If some records are rejected by a write-back or validation BAdI, the scope of default.lgf will not contain the rejected member combinations.

Example:

 

Dimension: DIM1

Members: D1M1, D1M2

 

Dimension: DIM2

Members: D2M1, D2M2

 

Input form (all intersections in the cube have the value 1):

 

 

The user decided to change value in the cells marked with yellow to 2:

 

 

2 values (D1M1,D2M1) and (D1M2,D2M2) will be sent to the cube.

 

As a result the scope will be a combination of the following members: D1M1,D2M1,D1M2,D2M2

Generating 4 possible combinations:

 

Sent by user: (D1M1,D2M1); (D1M2,D2M2) and extra: (D1M2,D2M1) and (D1M1,D2M2)

 

4 values will be processed by default.lgf.
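The scope expansion described above can be sketched as a per-dimension cross-product (a toy model of the behavior, not actual BPC code):

```python
from itertools import product

# Records actually sent by the user: (DIM1, DIM2) combinations.
sent = [("D1M1", "D2M1"), ("D1M2", "D2M2")]

# default.lgf scopes the union of members seen per dimension and then
# crosses them, so unchanged intersections enter the scope too.
dim1 = sorted({d1 for d1, _ in sent})
dim2 = sorted({d2 for _, d2 in sent})
scope = sorted(product(dim1, dim2))

print(len(scope))   # 4 combinations: the two sent plus two extra
print(scope)
```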

 

If the default.lgf is like:

 

*WHEN ACCOUNT //or any dimension

*IS * //any member

*REC(EXPRESSION=%VALUE%+1)

*ENDWHEN

 

The result will be:

 

 

This means that some extra combinations of members will be processed by default.lgf, not only the changed data.

 

General rules:

 

1. Don't use *XDIM_MEMBERSET/*XDIM_ADDMEMBERSET in default.lgf; do not redefine the scope. The original scope (not huge, by the way) has to be processed.


2. Use *IS criteria in the *WHEN/*ENDWHEN loop to select members for the calculations.

 

Sample:

 

For DM package script the code is like:


*XDIM_MEMBERSET SOMEDIM=%SOMEDIM_SET%  // member from user prompt - MEMBER1 or some fixed member

*WHEN SOMEDIM

*IS * // scoped in *XDIM_MEMBERSET

*REC(...)

*ENDWHEN

 

For default.lgf the code will be:

 

*WHEN SOMEDIM

*IS MEMBER1 // fixed member - condition to perform calculations in REC

*REC(...)

*ENDWHEN

 

3.*XDIM_FILTER can be used sometimes to narrow the scope, but the benefit of filtering against *IS is not clear.

 

Example:

 

ACCOUNT dimension contains 3 members: ACC1,ACC2,ACC3

 

*XDIM_FILTER ACCOUNT = [ACCOUNT].properties("ID") = "ACC1"

// The incoming scope will be filtered to ACC1 if present

*WHEN ACCOUNT

*IS *

*REC(EXPRESSION=%VALUE%+1) // +1 for ACC1

*ENDWHEN

 

*XDIM_MEMBERSET ACCOUNT=%ACCOUNT_SET%

// Filter is reset, %ACCOUNT_SET% contains original scope

*WHEN ACCOUNT

*IS *

*REC(EXPRESSION=%VALUE%+2) // +2 for ACC1,ACC2,ACC3

*ENDWHEN

 

*XDIM_FILTER ACCOUNT = [ACCOUNT].properties("ID") = "ACC2"

// The incoming scope will be filtered to ACC2 if present

*WHEN ACCOUNT

*IS *

*REC(EXPRESSION=%VALUE%+3) //+3 for ACC2

*ENDWHEN

 

The user sends 1 for all 3 accounts (ACC1, ACC2, ACC3). The result is:

 

ACC1: 4

ACC2: 6

ACC3: 3

 

You also have to be on a recent enough SP level for *XDIM_FILTER to work correctly (read the notes: search for "XDIM_FILTER").
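A toy simulation of the three blocks above (plain Python standing in for the script engine; each block reads the values left by the previous one):

```python
# Start: the user sent 1 for all three accounts.
cube = {"ACC1": 1, "ACC2": 1, "ACC3": 1}
scope = ["ACC1", "ACC2", "ACC3"]   # %ACCOUNT_SET%, the original script scope

def rec(members, delta):
    # Stand-in for *REC(EXPRESSION=%VALUE%+delta) over the current scope.
    for acc in members:
        cube[acc] += delta

rec([a for a in scope if a == "ACC1"], 1)  # *XDIM_FILTER down to ACC1: +1
rec(scope, 2)                              # filter reset via %ACCOUNT_SET%: +2
rec([a for a in scope if a == "ACC2"], 3)  # *XDIM_FILTER down to ACC2: +3

print(cube)  # {'ACC1': 4, 'ACC2': 6, 'ACC3': 3}
```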

 

If you have to calculate some function, like:

 

Result = Func([SomeDim].[Member1],[SomeDim].[Member2],..,[SomeDim].[MemberN]) (N members total)

 

And store the Result in some member, then you have to write N *WHEN/*ENDWHEN loops to prevent aggregation if more than one member is in scope. Without multiple loops the result will be multiplied M times, where M is the number of different members sent by the input form simultaneously.

 

Example (multiply 3 members):

 

*WHEN SomeDim

*IS Member1

*REC(EXPRESSION=%VALUE%*[SomeDim].[Member2]*[SomeDim].[Member3],SomeDim=ResultMember)

*ENDWHEN

 

*WHEN SomeDim

*IS Member2

*REC(EXPRESSION=%VALUE%*[SomeDim].[Member1]*[SomeDim].[Member3],SomeDim=ResultMember)

*ENDWHEN

 

*WHEN SomeDim

*IS Member3

*REC(EXPRESSION=%VALUE%*[SomeDim].[Member1]*[SomeDim].[Member2],SomeDim=ResultMember)

*ENDWHEN

 

In this example the REC line can be the same for all 3 loops (%VALUE% can be replaced by direct member reference):

 

*REC(EXPRESSION=[SomeDim].[Member1]*[SomeDim].[Member2]*[SomeDim].[Member3],SomeDim=ResultMember)

 

with minimum performance decrease.
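The aggregation pitfall is easy to reproduce with a toy model (assuming, as described above, that records written to the same destination within one *WHEN/*ENDWHEN loop are summed, while separate loops overwrite):

```python
cube = {"Member1": 2.0, "Member2": 3.0, "Member3": 4.0}
scope = ["Member1", "Member2", "Member3"]   # all three changed at once

def func(c):
    # The N-member formula (here: multiplication of all three members).
    return c["Member1"] * c["Member2"] * c["Member3"]

# Single loop: the REC fires once per scoped member and the records
# aggregate, so the result is multiplied M times (M = 3 here).
wrong = sum(func(cube) for _ in scope)      # 3 * 24 = 72

# One loop per member: each loop writes the destination on its own,
# overwriting the previous value instead of adding to it.
for _ in scope:
    right = func(cube)                      # stays 24
print(wrong, right)
```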

 

Using LOOKUP to the same model to get expression argument member

 

In some cases, for a simple formula like the multiplication of 2 members (price * qty) but with a long list of members, LOOKUP can be used:

 

Lets assume we have members in dimension SomeDim:

 

Price1, Price2, Price3, Price4

Qty1, Qty2, Qty3, Qty4

 

The results have to be written to:

Amount1, Amount2, Amount3, Amount4

 

Then we can add the properties MULT, RESULT and TYPE to dimension SomeDim and fill them:

 

ID           MULT     RESULT    TYPE

Price1    Qty1       Amount1    Price

Price2    Qty2       Amount2    Price

Price3    Qty3       Amount3    Price

Price4    Qty4       Amount4    Price

Qty1       Price1    Amount1    Qty

Qty2       Price2    Amount2    Qty

Qty3       Price3    Amount3    Qty

Qty4       Price4    Amount4    Qty

 

Code will be:

 

*LOOKUP SameModel

*DIM M:SomeDim=SomeDim.MULT //Get member ID stored in property MULT

*DIM MEASURES=PERIODIC //The default storage type of SameModel

*ENDLOOKUP

 

*XDIM_MEMBERSET MEASURES=PERIODIC //The default storage type of SameModel


*FOR %T%=Price,Qty //Or 2 loops - to prevent aggregation.


*WHEN SomeDim.TYPE

*IS %T%

*REC(EXPRESSION=%VALUE%*LOOKUP(M),SomeDim=SomeDim.RESULT)

*ENDWHEN


*NEXT
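A toy model of this property-driven pattern (the values are invented; each TYPE pass writes the same products, so the second pass simply overwrites the first, which is why the *FOR loop prevents aggregation):

```python
# Property table mirroring the one above (only 2 price/qty pairs shown):
# each member names its multiplier (MULT), destination (RESULT) and TYPE.
props = {
    "Price1": {"MULT": "Qty1",   "RESULT": "Amount1", "TYPE": "Price"},
    "Qty1":   {"MULT": "Price1", "RESULT": "Amount1", "TYPE": "Qty"},
    "Price2": {"MULT": "Qty2",   "RESULT": "Amount2", "TYPE": "Price"},
    "Qty2":   {"MULT": "Price2", "RESULT": "Amount2", "TYPE": "Qty"},
}
cube = {"Price1": 10.0, "Qty1": 3.0, "Price2": 5.0, "Qty2": 7.0}

amounts = {}
for t in ("Price", "Qty"):                       # *FOR %T%=Price,Qty
    for member, value in cube.items():
        if props[member]["TYPE"] == t:           # *IS %T%
            # *REC(EXPRESSION=%VALUE%*LOOKUP(M), SomeDim=SomeDim.RESULT)
            amounts[props[member]["RESULT"]] = value * cube[props[member]["MULT"]]

print(amounts)  # {'Amount1': 30.0, 'Amount2': 35.0}
```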

 

*FOR/NEXT Loops

 

In general, long and nested *FOR/*NEXT loops have to be avoided due to terrible performance. In most cases some property can be created and used in the script code instead of *FOR/*NEXT loops.

 

Using some value stored as property in calculations

 

Sometimes it looks like a good idea to store some value in a property and use it in calculations. Actually it's a bad idea: you can't directly reference the property value in the expression; you have to use some %VAR% and a long *FOR/*NEXT loop. Always store values in SIGNEDDATA, maybe using some dummy members.

 

SIGN and ACCTYPE in EXPRESSION calculations

 

The calculations in default.lgf use different sign conversion logic with ACCTYPE than the same script run by a DM package. As a result, the same script can produce different results as default.lgf and as a script in a DM package.

 

For default.lgf (BPC NW 10), all values read in the script scope are sign-converted based on the ACCTYPE property, and the result of the EXPRESSION calculation is also sign-converted based on the ACCTYPE property of the target account:

 

SignedData_Result = if(Result.ACCTYPE=INC,LEQ, -1, 1) * Function(if(Argument1.ACCTYPE=INC,LEQ, -1, 1) * SignedData_Argument1, if(Argument2.ACCTYPE=INC,LEQ, -1, 1) * SignedData_Argument2, ...)

 

Example:

Dimension ACCOUNT: Members: A, B, C

 

ID   ACCTYPE

A    INC

B    EXP

C    INC

 

default.lgf

 

*WHEN ACCOUNT

*IS A

*REC(EXPRESSION=%VALUE%+[ACCOUNT].[B],ACCOUNT=C)

*ENDWHEN

 

The data sent by user in the input form will be:

 

A: 5

B: 10

 

This data will be stored as SIGNEDDATA:

 

A: -5

B: 10

 

Calculations:

 

(-1 * -5 + 1 * 10) * (-1) = -15 (SignedData_Result)

 

And on the input form:

 

C: 15

 

The same script launched by a DM package (BPC NW 10) will not have any sign conversions; all calculations will be done with SIGNEDDATA values:

 

-5 + 10 = 5

 

The result on the report:

 

C: -5
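Both behaviors can be reproduced with the sign-conversion formula above (a sketch; flip() encodes the ACCTYPE-based conversion for INC/LEQ accounts):

```python
def flip(acctype):
    # INC and LEQ values are stored in SIGNEDDATA with the sign inverted.
    return -1 if acctype in ("INC", "LEQ") else 1

acctype = {"A": "INC", "B": "EXP", "C": "INC"}
signed = {"A": -5, "B": 10}        # user entered A: 5, B: 10 on the form

# default.lgf: arguments and result are sign-converted around the expression.
expr = flip(acctype["A"]) * signed["A"] + flip(acctype["B"]) * signed["B"]
signed_default = flip(acctype["C"]) * expr           # -15 in SIGNEDDATA
shown_default = flip(acctype["C"]) * signed_default  # 15 on the input form

# Same script via a DM package: plain SIGNEDDATA arithmetic, no conversion.
signed_dm = signed["A"] + signed["B"]                # 5 in SIGNEDDATA
shown_dm = flip(acctype["C"]) * signed_dm            # -5 on the report

print(signed_default, shown_default, signed_dm, shown_dm)  # -15 15 5 -5
```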

 

*DESTINATION_APP

 

If it's required to send data to a different model, the *DESTINATION_APP statement can be used in default.lgf.

Sign conversion logic is also applicable to writing data using *DESTINATION_APP.

 

The same rules are applicable to the *WHEN/*ENDWHEN loop after *DESTINATION_APP. By the way, in BPC NW 10 the *DESTINATION_APP statement is valid only for the next *WHEN/*ENDWHEN loop and has to be repeated before each *WHEN/*ENDWHEN that sends data to the other application (in BPC NW 7.5, all *WHEN/*ENDWHEN loops after a single *DESTINATION_APP write to the target cube).

 

If some dimension is missing in the destination model, *SKIP_DIM=SomeDim has to be used. But an issue can arise in the following case:

 

SourceModel:

DimMissingInTarget: Member1, Member2, ..., MemberN (base) - having root parent All

SomeDim: Mem1, Mem2, ... - dimension in both Source and Target

 

TargetModel:

SomeDim: Mem1, Mem2, ... - dimension in both Source and Target

 

If some of Member1, Member2, ..., MemberN is changed in SourceModel, the result of All has to be transferred to TargetModel.

 

The code in default.lgf of SourceModel will be:

 

//some calculations in the SourceModel

...

 

*FOR %M%=Member1,Member2,...,MemberN //list of base members of the skipped dimension

 

*DESTINATION_APP=TargetModel

*SKIP_DIM=DimMissingInTarget

 

*WHEN DimMissingInTarget

*IS %M%

*WHEN SomeDim //SomeDim - dimension existing both in Source and Target

*IS Mem1,Mem2,... //some list of members of SomeDim changed by user and to be transferred to TargetModel

*REC(EXPRESSION=[DimMissingInTarget].[All]) //Parent All value is used!

*ENDWHEN

*ENDWHEN

 

*NEXT

 

N loops for N base members of DimMissingInTarget (useful for small N)

 

Another option for this particular case is to explicitly scope the skipped dimension with *XDIM_MEMBERSET:

 

*XDIM_MEMBERSET DimMissingInTarget=<ALL>

*DESTINATION_APP=TargetModel

*SKIP_DIM=DimMissingInTarget

 

*WHEN SomeDim //SomeDim - dimension existing both in Source and Target

*IS Mem1,Mem2,... //some list of members of SomeDim changed by user and to be transferred to TargetModel

*REC(EXPRESSION=%VALUE%)

*ENDWHEN

 

But in this case you have to put this code at the end of the default.lgf or restore original scope for DimMissingInTarget:

 

*XDIM_MEMBERSET DimMissingInTarget=%DimMissingInTarget_SET% // %xxx_SET% variable always contains the original script scope.

 

Custom Logic BADI in default.lgf

 

It's also possible to call a Custom Logic BAdI in default.lgf to perform calculations that are not easy, or even not possible, to implement using script logic. The BAdI has to work with the current scope and can take some fixed parameters.

 

Example:

 

//Some calculations before badi call

...

 

*START_BADI SOMEBADI

QUERY=ON //to get records from the current scope

WRITE=ON //to use default write to cube

DEBUG=OFF

SOMEPARAM=SOMEFIXEDVALUE

...

*END_BADI // Script scope will be reset to initial script scope here if changed before

 

//Some calculations after badi call

...

 

RUNLOGIC_PH BADI

 

It's also possible to use the RUNLOGIC_PH BAdI (How To Implement the RUNLOGIC_PH Keyword in SAP... | SCN) to speed up some calculations using the CHANGED parameter. For example, a single change of price has to recalculate values in multiple entities and multiple time periods.

 

*START_BADI RUNLOGIC_PH

QUERY=OFF

WRITE=ON

LOGIC = CALLED_LOGIC.LGF

APPSET = SameEnvironment

APP = SameModel

DIMENSION ENTITY=BAS(ALLENTITIES)

DIMENSION SomeDim=%Somedim_SET% //script initial scope

...

CHANGED=ENTITY

 

Write Back BADI instead of default.lgf

 

The same functionality can be achieved by Write Back BADI - perform calculations triggered by user input. The details are described here: Calculations in Write Back BADI - Default.lgf R... | SCN

The significant difference between the Write Back BAdI and default.lgf is that the Write Back BAdI receives the data sent by the user before it is stored in the cube, and only the sent values are processed.

 

B.R. Vadim

 

P.S. 2014.06.11 - incorrect case about function with "+/-" removed.

P.P.S. 2014.07.23 - sample for scope added

P.P.P.S. 2014.09.25 - *XDIM_FILTER functionality described

P.P.P.P.S. 2016.02.26 - effect of write back badi on the scope of default.lgf


Featured Content for SAP Business Planning and Consolidation for SAP NetWeaver


SAP BW Configuration for BPC NW


This configuration guide provides the information you need to configure SAP BW for BPC NW.

The procedure is:

 

Step 1: Transferring the Application Component Hierarchy

     1- Use transaction code RSA9, or navigate: Integration with SAP Components -> Data Transfer to the SAP Business Information Warehouse -> Business Content DataSources -> Transfer Application Component Hierarchy.

     2- Choose Yes and then enter the package name and request number.


Use EPM Add-in to Report on Top of HANA Views


The EPM add-in uses MDX (ODBO or XMLA) to connect to HANA views. Note: in either case, no data entry is possible. For ODBO, the HANA MDX provider has to be installed on the local machine. For XMLA, you can use the XMLA connection available in the EPM add-in. After that, the EPM add-in can connect to any analytic/calculation view in HANA.

Recommendations related to back-end connectivity and support:

•        BPC on HANA (Standard Model):

We do not recommend reporting directly on HANA views using the EPM add-in connectivity.

The BPC standard model use case is based on MDX syntax, and the MDX statements generated by BPC take the metadata, master data and hierarchies into account. MDX is used in the BPC standard model for:

     1- Measure calculation: PERIODIC, QTD, YTD (reporting relevant)

     2- YTD and sign-flip calculation based on account type (reporting relevant)

     3- Member formula calculation (reporting relevant)

Reporting directly on a HANA view with the EPM add-in MDX connectivity therefore bypasses these MDX calculations in BPC. So, in order to generate exactly the same reporting values for all three measures (PERIODIC, QTD and YTD), the customer needs to build his own logic for the three points above.

•        BFC on HANA:

We recommend using the EPM add-in connectivity to report on HANA views.

Indeed, reporting uses the FC Cube Designer HANA views, which have been specifically deployed by Cube Designer to be browsed by a BI tool, so everything is consistent and ready for reporting.

BPC on HANA 10.1 Roadmap and Optimization Techniques


Hi All,

 

     Below are the steps and process, with an overview of the new concepts of BPC 10.1 on HANA and its optimization for mixed scenarios of BW with HANA views.

 

     I have also explained them with an example scenario.

 

     [Slide images: Slide1.JPG, Slide3.JPG–Slide16.JPG]

How To Use the BPC Mass User Management Tool SAP Business Planning and Consolidation 10.1 NW,Embedded Version


As most BPC security objects are not transportable using the BPC Transport Framework, this guide demonstrates how to proceed using a custom program.

SAP Solution: SAP Business Planning and Consolidation 10.1 NW, Embedded Version.

 

 
