
Lessons Learned: SAP ECC Systems Consolidation


 

For: SAP Application Delivery Professionals

By: Abhishek Srivastava | Deloitte | Consulting | SAP Package Technologies

 

 

Key Takeaways/Preface


The consolidation of two SAP environments, each more than a decade old, is a complex undertaking that does not follow our standard methodology. There are various parameters we need to rethink and design that are not normally required in a Greenfield SAP implementation. In this article, we discuss lessons learned from the consolidation of two SAP landscapes for a C&IP industry client, where the Parts business running on SAP R/3 4.7 was migrated into the Appliances (a.k.a. Majors) business running on SAP ECC 6.0.

 

Easier design & build, but complex testing

Most system/business consolidation efforts do not call for process reengineering, so design and build go smoothly: we limit ourselves to drawing the as-is process and defining the to-be only for conflicting scenarios. However, testing effort and complexity increase many-fold, because we must ensure both that each migrated process runs in the new system (integration testing) and that nothing breaks in the processes already running in the target system (regression testing). A common oversight is to skip the requirements list with the argument that we are not doing business process reengineering, but testing coverage cannot be assured without requirement tracking.

 

Don’t underestimate Batch consolidation effort and testing

Our OTC processes are heavily dependent on batch schedules involving 3-tier batch management applications. Unlike Greenfield implementations, where we define each batch job's frequency, variants, etc. from scratch, we may be tempted to retrofit all jobs from one system to another as-is; this is a mistake!


One of our biggest learnings was that batch job consolidation must be treated the same as a Greenfield implementation for the processes being migrated from one system to the other. You should map each and every migrated (Parts) process, perform a fit/gap analysis against the Majors system's batch jobs, and retrofit variants, schedules, and frequencies accordingly.


User Impact and training effort are minimal

Unlike Greenfield implementations, a system consolidation changes little for the end user beyond the new system, a few new transaction names, and limited process changes where the two processes conflicted. We can live with just hand-outs rather than comprehensive classroom training.

 

Deployment Plan is the most interesting journey

The deployment plan is multi-fold more complex for system consolidation projects, where we must plan the system cutover and business freeze of two different OTC business processes merging into one system. This directly translates to a business freeze for a multi-billion dollar company, so the plan had better be fool-proof and self-explanatory. Checks and balances must be performed at all stages, with a controlled, throttled, multi-stage deployment. Each task and its duration will be questioned for its worth, and the most tempting question an integration lead should ask everyone, including him/herself, is which of those cutover tasks can be done without a business freeze.

 

Not all communications are the same

Communication is key to the success of any project, but system consolidation needs an additional communication layer for impacted (Parts business) as well as apparently non-impacted (Majors business) parties. Essentially, even the Majors business is impacted by the consolidation effort, but they will not know it until you approach them through all possible communication channels, such as walk-the-wall sessions covering new data elements, batch schedule changes, business freeze, etc.


In the rest of this article, we discuss considerations across all technology areas and SDLC phases.

 

1.   System Landscape and Sync Considerations

 

This is a comparatively landscape-heavy engagement in terms of keeping two different production systems in sync with the project system landscape. We need to keep the production break-fix path intact while also maintaining a separate development and QA landscape for the duration of the consolidation project.


The project system landscape should be updated regularly with production break-fix changes from both the Parts and Majors production systems. Do not underestimate the manual effort of keeping the project landscape in sync with the old Parts production system: every change moving to the retiring Parts production system must be manually retrofitted into the project landscape, because changes cannot be transferred via the transport path due to the mismatch in SAP version and support pack level.

 

We mitigated this risk by introducing a change governance process: whoever adds a change to the production landscape is responsible for retrofitting it into the project landscape as well. We also introduced a hard freeze after testing completion, and only must-have changes/defects were addressed for the remainder of the project.

 

This is one example of the bare-minimum system landscape and sync paths one would need for a consolidation project.

[Figure: Landscape.png – bare-minimum system landscape and sync paths for a consolidation project]

 

2. Design Considerations

 

Unlike a Greenfield implementation, where you study the as-is and define the to-be for the entire business process framework, an ERP system consolidation project limits scope to defining the as-is, and the to-be only for conflicting scenarios. This is not a process redesign; it is focused on how two different sets of processes, code, configuration, and data can co-exist in one system and blend seamlessly from both a process and a system parameter perspective. For example: how can we have the delivery due list (DDL) running for both Parts and Majors? Do we keep the DDL job variant and schedule entirely separate, or should the Majors DDL variant and schedule be widened to cater to both processes?


Design should be emphasized on defining each of these:

  1. As-is process and requirements matrix for the Parts business process
  2. As-is process and requirements matrix for the Majors business process
  3. To-be definition for conflicting processes (one side adopts the other's process, OR an entirely new process for both)

 

You may be wondering why you need to put humongous effort into a requirements matrix when the processes barely change as part of the consolidation. The answer lies in a later SDLC phase: testing. Unless you have the requirements matrix for both Parts and Majors, you cannot ensure test scenario coverage, depth, and breadth. The effort pays off when you can map your entire (or at least critical) process path to test scripts and see whether they are really tested for all relevant permutations and combinations.


In system consolidation projects, the business is most interested in knowing what is changing for them, so walk-the-wall sessions help: we can demonstrate the changes by region and process area. For example, we can segregate each area into the following buckets, which gives a high-level idea of the impact on their respective business responsibility areas so they can engage further accordingly.

 

[Figure: Design.png – change-impact buckets by region and process area]

 

3.   Build Considerations

 

3.1 Scope Baseline

One of the biggest challenges we faced was locking down the custom code migration scope, from a Statement of Work (a.k.a. SoW) as well as a functionality perspective. Note that we are talking about retiring a 15-year-old ERP Parts system and migrating every usable (not just active) piece of code to another more-than-decade-old (but current version and support pack) ERP system. Our analysis revealed that at least 40% of the code and configuration in the retiring application was obsolete and no longer used. We used 2 different tools to establish the scope baseline:

  1. A well-renowned 3rd-party tool
  2. An in-house upgrade tool

 

Both tools produced almost the same results, which gave us a baseline for scoping, effort, and timeline. However, neither tells you which custom code repository objects are still technically active but no longer used from a process standpoint. This means we needed another layer of scope filtering: tagging every piece of custom code (user exits, reports, programs, workflows, etc.) to a current process and ruling out the ones no longer tied to any process. This is troublesome but required for any system consolidation engagement.

 

3.2 Code Retrofit

Unlike a Greenfield implementation, we categorize objects into 3 broad categories for code consolidation of ERP systems:

 


Category | Definition | Effort
Port Objects | No conflict between the processes running in the 2 systems; purely lift & shift | ~30% of standard effort, including documentation and unit testing
Leverage Objects | Conflict between the processes running in the 2 systems, as objects with the same names reside in both | ~50% of standard effort, including documentation and unit testing
Test-Relevant Objects | No build action required; already available in the target system | ~10% of standard effort, as only testing effort is required


Here, leverage object retrofit is the key to success: these are the objects most prone to errors and issues, impacting both sides of the process if not retrofitted carefully. They must co-exist under the same name and serve both the Parts and Majors processes as-is. Specifically for leverage objects, we must have a unique global filter to segregate code execution (such as sales organization, sales area, document types, or user role) so that specific code elements are executed only for the intended process chains.
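For illustration, here is a minimal ABAP sketch of such a global filter inside the standard SD user exit include MV45AFZZ; the sales organization values are hypothetical placeholders, not actual configuration.

* Minimal sketch of a global filter in the standard SD user exit
* include MV45AFZZ. Sales orgs '1000' (Majors) and '2000' (Parts)
* are hypothetical placeholders.
FORM userexit_save_document.
  CASE vbak-vkorg.  " sales organization acts as the global filter
    WHEN '1000'.    " Majors process chain
      " ... existing Majors-specific enrichment stays here ...
    WHEN '2000'.    " Parts process chain
      " ... retrofitted Parts-specific enrichment goes here ...
    WHEN OTHERS.
      " no consolidation-specific logic for other sales orgs
  ENDCASE.
ENDFORM.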

 

We should be careful when categorizing leverage objects: most leverage objects can very well be changed to port objects if migrated under different names. This may have a user training impact, but it is still the recommended approach to minimize the number of leverage objects, as each leverage object introduces risk to overall business operations.


3.3 Configuration Retrofit

There will always be a laundry list of items that were not considered in the initial effort estimation. We found that several IMG configuration nodes/values had to be changed as part of the consolidation due to conflicts with the target system. This means the entire custom code repository needs to be scanned for hardcoded values or TVARVC entries, which must then be replaced with the new values (see the scan sketch after the list below). Some of the IMG configuration values that are likely to change as part of a system consolidation:

 


Document text IDs, pricing procedures, line item categories, line item category groups, document types, pricing tables, pricing conditions, custom table names, delivery blocks, billing blocks, plant IDs, order reasons, channels, status codes, etc.
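As an illustration, here is a minimal ABAP sketch of such a scan, assuming a simple substring search across the customer namespace; the default search value is a hypothetical placeholder, and a real utility would also scan TVARVC entries and handle includes.

REPORT z_scan_hardcoded_values.
* Illustrative sketch: find custom programs that hardcode a value
* which changes during consolidation (e.g. an old document type).
PARAMETERS p_value TYPE c LENGTH 20 DEFAULT 'ZOR1'.

DATA: lt_progs  TYPE TABLE OF trdir-name,
      lv_prog   TYPE trdir-name,
      lt_source TYPE TABLE OF abaptxt255,
      ls_line   TYPE abaptxt255.

* Customer programs (Z/Y namespace) from the program directory
SELECT name FROM trdir INTO TABLE lt_progs
  WHERE name LIKE 'Z%' OR name LIKE 'Y%'.

LOOP AT lt_progs INTO lv_prog.
  READ REPORT lv_prog INTO lt_source.
  CHECK sy-subrc = 0.
  LOOP AT lt_source INTO ls_line.
    IF ls_line-line CS p_value.
      WRITE: / lv_prog, sy-tabix, ls_line-line(72).
    ENDIF.
  ENDLOOP.
ENDLOOP.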

 

The last piece of the puzzle is table index retrofit. Over time, the support team may have created hundreds of DB indexes, but not all of them are still used, and some may already be covered partially or fully by indexes available in the target system. A comprehensive analysis is required before moving DB indexes; otherwise they can adversely affect DB size and performance.

 

 

4. Data Load Considerations

 

We are dealing with different kinds of business data here: master data; supporting data objects such as sales areas, inventory, material substitutions, purchase info records, routes, and inclusions/exclusions; open transaction records; and historical records. Their migrations need to happen in multiple stages.

 

Let's categorize the data objects by phase and sync method. I will limit the discussion to how we mitigated risk by deploying them in multiple stages; you can adopt a different strategy based on the complexity of your consolidation engagement.

 


Phase | Data Object | Sync Frequency | Sync Schedule | Sync Method(s)
1 | Sales Area and Master Data | Recurring till Go-Live | At end of UAT | ALE
2 | Inventory, Materials Subs, PIRs, Incl/Excl., etc. | Recurring till Go-Live | At end of UAT | LSMW, Custom Conversion
3 | Open Transaction Data – Sales Orders | Initial, once | Business Go-Live | LSMW, BAPI, Custom Conversion, 3rd-party tool
4 | Historical Records | Initial, once | Business Go-Live | BW, RFC
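For phase 3 in the table above, here is a minimal sketch of the BAPI-based load path for a single open sales order. The document type, sales area, partner, and material values are hypothetical placeholders; a real conversion loops over extracted source orders and maps many more fields.

REPORT z_load_open_orders.
* Illustrative sketch: create one migrated open sales order via the
* standard BAPI. All values below are placeholders.
DATA: ls_header  TYPE bapisdhd1,
      lt_items   TYPE TABLE OF bapisditm,
      ls_item    TYPE bapisditm,
      lt_partner TYPE TABLE OF bapiparnr,
      ls_partner TYPE bapiparnr,
      lt_return  TYPE TABLE OF bapiret2,
      ls_return  TYPE bapiret2,
      lv_vbeln   TYPE bapivbeln-vbeln.

ls_header-doc_type   = 'ZOR'.        " hypothetical order type
ls_header-sales_org  = '2000'.       " hypothetical Parts sales org
ls_header-distr_chan = '10'.
ls_header-division   = '00'.

ls_partner-partn_role = 'AG'.        " sold-to party
ls_partner-partn_numb = '0000100001'.
APPEND ls_partner TO lt_partner.

ls_item-itm_number = '000010'.
ls_item-material   = 'PART-0001'.
ls_item-target_qty = 5.
APPEND ls_item TO lt_items.

CALL FUNCTION 'BAPI_SALESORDER_CREATEFROMDAT2'
  EXPORTING
    order_header_in = ls_header
  IMPORTING
    salesdocument   = lv_vbeln
  TABLES
    order_items_in  = lt_items
    order_partners  = lt_partner
    return          = lt_return.

READ TABLE lt_return INTO ls_return WITH KEY type = 'E'.
IF sy-subrc = 0.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  WRITE: / 'Error:', ls_return-message.
ELSE.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.
  WRITE: / 'Created sales order', lv_vbeln.
ENDIF.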

 

 

 

4.1     Data Reconciliation

The key to successful data transfer lies in data load dress rehearsals and the ability to reconcile between source and destination. Reconciliation can happen at multiple levels: header summary as well as line item. For example, open sales order migration must be reconciled at the header summary level (net value of all migrated orders, total number of line items, and total units across line items) and then drilled down to line item level, where we must compare key attributes like item status, profit segment, shipping point, and blocks between the source and target systems.

 

We must develop a reconciliation procedure for each data object, and a specially skilled task force is required for inventory and open sales order migrations. Inventory reconciliation in particular needs to happen at both quantity and accounting levels. It is more challenging if materials are shared between the two systems (like accessories), where we have to wait for the Go-Live freeze window to sync the inventory of such materials. The rest of the reconciliation mechanisms can be as simple as extracts from both systems followed by Excel VLOOKUPs, MS Access database queries, custom programs, etc. You must account for utility development effort for data reconciliation and staff an adequate number of resources on the data steward team.
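As an illustration, here is a minimal sketch of a header-count reconciliation, assuming a hypothetical RFC destination to the retiring Parts system and a hypothetical Parts sales organization; real reconciliation adds net values, units, and line item attribute comparisons.

REPORT z_recon_open_orders.
* Illustrative sketch: compare the open sales order header count in
* the source system (via RFC_READ_TABLE over a placeholder RFC
* destination) with the local target system.
CONSTANTS c_dest TYPE rfcdest VALUE 'PARTS_PROD'.  " placeholder

DATA: lt_opts    TYPE TABLE OF rfc_db_opt,
      ls_opt     TYPE rfc_db_opt,
      lt_flds    TYPE TABLE OF rfc_db_fld,
      ls_fld     TYPE rfc_db_fld,
      lt_data    TYPE TABLE OF tab512,
      lv_cnt_src TYPE i,
      lv_cnt_tgt TYPE i.

ls_opt-text = 'VKORG EQ ''2000'''.   " hypothetical Parts sales org
APPEND ls_opt TO lt_opts.
ls_fld-fieldname = 'VBELN'.
APPEND ls_fld TO lt_flds.

CALL FUNCTION 'RFC_READ_TABLE'
  DESTINATION c_dest
  EXPORTING
    query_table = 'VBAK'
  TABLES
    options     = lt_opts
    fields      = lt_flds
    data        = lt_data.

DESCRIBE TABLE lt_data LINES lv_cnt_src.

* Same selection on the target side after the migration load
SELECT COUNT(*) FROM vbak INTO lv_cnt_tgt
  WHERE vkorg = '2000'.

IF lv_cnt_src = lv_cnt_tgt.
  WRITE: / 'Header count reconciled:', lv_cnt_src.
ELSE.
  WRITE: / 'MISMATCH: source', lv_cnt_src, 'target', lv_cnt_tgt.
ENDIF.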

 

 

We adopted the following phases for defining the data reconciliation mechanism and sign-offs for each data object.

[Figure: Data.png – phases for defining the data reconciliation mechanism and sign-offs]

 

 

4.2     Historical Data Management

We must ask for the business reasons behind making historical transaction data available in the target system. Most of the time, the need is limited to return/claim cross-references, display of transactions on a need basis, or legal obligations. In our project, we had use cases for the latter two, and we provided a separate link for consumers/distributors to display historical transaction data from the retired system on websites, rather than bringing it into the target system. This is above and beyond the BW historical data already available to end users.

 

 

 

5.  Batch Setup Considerations

 

Batch is one of the most critical pieces of an ERP consolidation project. Batch setup involves the retrofit and reconciliation of multiple elements:

 

1. Batch Scope Identification: There are many ways to identify the batch migration scope, but we started from what was actually running in the legacy SAP 4.7 Parts system. We pulled all jobs that ran in the last 90 days and removed duplicates, which gave us the scope baseline (a minimal sketch follows). Those jobs then need to be aligned to business processes, because you never know how many jobs have been running without any purpose in a more-than-decade-old system. We ruled out around 15% of the jobs by mapping each job to a business process, and removed housekeeping jobs that were already running in the target system.
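A minimal ABAP sketch of that baseline pull, assuming the standard job header table and finished jobs only:

REPORT z_batch_scope_baseline.
* Illustrative sketch: distinct job names that finished in the last
* 90 days, as the starting point for the batch migration scope.
DATA: lv_from TYPE d,
      lt_jobs TYPE TABLE OF tbtco-jobname,
      lv_job  TYPE tbtco-jobname.

lv_from = sy-datum - 90.

SELECT DISTINCT jobname FROM tbtco INTO TABLE lt_jobs
  WHERE strtdate >= lv_from
    AND status    = 'F'.         " finished jobs only

LOOP AT lt_jobs INTO lv_job.
  WRITE: / lv_job.               " input to the process fit/gap
ENDLOOP.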

 

2. SAP Variant Retrofit: We had to be really creative here. Over time, a client may accumulate thousands of batch variants belonging to multiple programs, so redefining each one by hand is not realistic. We developed a custom utility to pull all variants of all in-scope batch job programs from the Parts system into the Majors system as-is, and then aligned them with the Majors variants through due diligence by process area. For example, DDL jobs can very well be merged into one if their variants are mapped correctly. We faced several challenges with variant migration and learned the hard way that the utility must consider the following scenarios (a sketch of the utility's core calls follows the list):

 

2.1 Dynamic variants (date/time/user): dynamic parameters cannot be transferred due to the mismatch in SAP versions. You will have to identify and retrofit them manually.

2.2 File path changes: any file-based interface needs to refer to the directories of the target system.

2.3 Conflicting variants: add a 2-character suffix to conflicting variant names of 12 characters or less; longer names must be retrofitted manually due to the 14-character variant name limit.

2.4 Multi-tab variants: selection screen filters spanning multiple tabs have to be retrofitted manually (example: the delivery due list job program).
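A minimal sketch of the utility's two core calls, assuming the variant read runs in (or is fetched from) the source system; report, variant, and suffix values are hypothetical placeholders.

REPORT z_variant_retrofit.
* Illustrative sketch of a variant-migration utility's core calls:
* read a variant's contents, then re-create it (suffixed to avoid
* a name conflict). All names below are placeholders.
DATA: lv_report  TYPE rsvar-report  VALUE 'ZPARTS_DDL',
      lv_variant TYPE rsvar-variant VALUE 'DAILY_US',
      ls_varid   TYPE varid,
      lt_values  TYPE TABLE OF rsparams,
      lt_texts   TYPE TABLE OF varit,
      ls_text    TYPE varit.

CALL FUNCTION 'RS_VARIANT_CONTENTS'
  EXPORTING
    report  = lv_report
    variant = lv_variant
  TABLES
    valutab = lt_values
  EXCEPTIONS
    OTHERS  = 4.
CHECK sy-subrc = 0.

ls_varid-report  = lv_report.
ls_varid-variant = 'DAILY_USPT'.     " suffixed, within 14 chars
ls_text-report   = lv_report.
ls_text-variant  = ls_varid-variant.
ls_text-langu    = sy-langu.
ls_text-vtext    = 'Migrated from Parts'.
APPEND ls_text TO lt_texts.

CALL FUNCTION 'RS_CREATE_VARIANT'
  EXPORTING
    curr_report   = lv_report
    curr_variant  = ls_varid-variant
    vari_desc     = ls_varid
  TABLES
    vari_contents = lt_values
    vari_text     = lt_texts
  EXCEPTIONS
    OTHERS        = 8.
IF sy-subrc = 0.
  WRITE: / 'Variant created:', ls_varid-variant.
ENDIF.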

 

  

3. SAP Job Definitions: With programs and variants migrated, the next step is to define the jobs in SAP with the right dependencies and step lists. We developed a custom utility to pull all job definitions of the in-scope batch jobs from the Parts system into the Majors system as-is, and then aligned them with the Majors job definitions through due diligence by process area (a minimal sketch of the job API follows).
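A minimal sketch of defining one migrated job through the standard job API, with placeholder job, program, variant, and start-time values; real step lists and dependencies come from the source system's job definitions (TBTCO/TBTCP).

REPORT z_job_definition.
* Illustrative sketch: define one migrated batch job via the
* standard job API. Names and the start time are placeholders.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZPARTS_DDL_DAILY',
      lv_jobcount TYPE tbtcjob-jobcount.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 4.
CHECK sy-subrc = 0.

* One JOB_SUBMIT per step of the source job definition
CALL FUNCTION 'JOB_SUBMIT'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    report    = 'ZPARTS_DDL'     " migrated program
    variant   = 'DAILY_USPT'     " retrofitted variant
    authcknam = sy-uname
  EXCEPTIONS
    OTHERS    = 4.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    sdlstrtdt = sy-datum
    sdlstrttm = '220000'         " off-hours slot, placeholder
  EXCEPTIONS
    OTHERS    = 4.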

 

4. Job Scheduling: We used a well-renowned 3rd-party tool because of our heterogeneous but interdependent system landscape. We had a requirement to trigger a non-SAP system job once an SAP job finishes, hence the external tool; if you don't have such complexity, you can define the schedule and frequency within the SAP system itself. Here is a snapshot of how we aligned the Parts and Majors critical paths for batch.

 

[Figure: Batch.png – alignment of Parts and Majors batch critical paths]

 

 

6.  Testing Considerations

 

This is the most important phase of a consolidation project. As mentioned earlier, the testing scope is doubled: we not only need to test the processes we are migrating from one system to the other, but must give equal focus to not breaking anything in the processes already running in the target system. Let's discuss some of the learnings we had during the testing stage.

 

6.1   Integration & Regression Testing

As in a Greenfield project, we must have a requirements matrix to document the test scenarios and variations. A common oversight is to skip the requirements matrix with the argument that we are not doing business process reengineering, but testing coverage cannot be assured without requirement tracking.

 

Further, the testing critical path lies with:

 

  1. Leverage-categorized code objects (see Build Considerations in section 3.2) must be translated into business processes and then into test scripts that cover both sides of the process scenarios. Test scenarios for every leverage object should be drafted with positive and negative test cases for the Parts vs. Majors business. A positive case for one side may or may not act as a negative case for the other; it depends entirely on how you migrated and blended the two processes in the same system.
  2. All conflicting scenarios that have been redesigned to cater to both sides of the business must be tested with all variations of master data. Essentially, this is the change we are bringing to the client's business, and we must ensure the changes are appropriately addressed, accepted, and signed off by both the Parts and Majors sides of the business.
  3. All interface connections with non-SAP systems, external systems, websites, etc. must be tested for all scenarios, as most of these connections may require redefinition of extractors, filters, URLs, new network ports, etc. You will never be sure which changes are required (and on which side of the interface connection) unless you test them for all variations. For example, CRM middleware adapter objects must be adjusted so they pull/push data as before, rather than opening wider channels and impacting call center operations.

 

6.2   Batch Testing

This is another critical piece of the testing puzzle. When we migrate thousands of variants, batch definitions, and schedules from one system to another live system, we bring a great deal of uncertainty about how they are going to blend and work together. Based on the complexity of your business, you may have from a few hundred to several thousand batch jobs running across the source and target systems. You may not be able to test every variation of every job, but you must focus on the critical OTC process runs that are heavily dependent on batch jobs.

 

Batch testing can very well co-exist with the performance testing landscape: you create a true production-like environment for performance testing by running the consolidated, cleansed batch and letting orders flow through the ship-bill-accounting cycle. It is worth mentioning that a non-critical job that is followed by a critical one also becomes critical for your testing. For example, the credit hold removal job is essential for the critical delivery due list runs. Here is a snapshot of how we tracked the batch schedule baseline and the testing of the critical path.

 

[Figure: BatchTest.png – batch schedule baseline and critical path test tracking]

 

 

7. Deployment Plan Considerations

 

The deployment plan is multi-fold more complex for system consolidation projects, where we must plan the system cutover and business freeze of two different OTC business processes merging into one system. This directly translates to a business freeze for a multi-billion dollar company, so the plan had better be fool-proof and self-explanatory. Checks and balances must be performed at all stages, with a controlled, throttled, multi-stage deployment. Each task and its duration will be questioned for its worth, and the most tempting question an integration lead should ask everyone, including him/herself, is which of those tasks can be done without a business freeze. Let's discuss the main components of the deployment plan:

 

7.1   Cutover Plan – System and Business Cutover

The primary goal of any cutover plan is to make the system business-ready with minimal impact on business operations. We recommend a staggered 5-stage deployment specifically for consolidation projects. This helps minimize business impact, mitigate and stagger risk, and enhance focus on key areas, especially when we are dealing with two different ends of the business.

 

[Figure: Cutover.png – staggered 5-stage deployment]

 

Stage 1 – Cutover Preparation: In this stage we perform activities that do not impact the business, i.e. mostly out-of-system activities like master data cleansing, cutover logistics, socialization, and finalization of transports and their sequence. This stage runs on a standard 8-hour calendar.

 

Stage 2 – Technical Go-Live: We migrate all code, configuration, indexes, and manual configuration in this stage. It should occur at least a couple of weeks before Business Go-Live, giving us enough time for the next stage of data load and batch setup. This stage also gives us the benefit of dealing up front with any Majors business issues that slipped through regression testing, so the system stabilizes before the Parts business goes live in the target system. This stage runs on a 24x7 calendar and requires a Majors business freeze and a system backup as a rollback point.

 

Stage 3 – Parts Data Load/Batch Setup: This stage runs in parallel with the Majors stabilization phase and takes comparatively the longest time. You must find the right slots for data loads, as you will be loading Parts data into an already-live Majors business system, usually outside online business hours and on weekends. You can load everything except open sales orders and inventory, as these two data objects keep changing every minute in the source system. This stage runs on an available-slot-based calendar.

 

Stage 4 – Business Go-Live: This is the final stage of deployment, when we freeze both the Parts and Majors ends of the business to establish a clear rollback point in case of any catastrophe. We focus on clearing the warehouse pipeline and closing the finance books of the retiring source system, followed by the open data object loads. We could go a lot deeper on this stage, but this is the moment when we flip the switch and retire the Parts ERP system. All data is queued up at PI, EDI, and other middleware ends during this stage, and we must be careful before opening the floodgates, introducing a controlled and throttled data flow.

 

Stage 5 – Stabilization: It is self-explanatory.

 

The key to any cutover plan's success lies in dress rehearsals. The more you rehearse, the more ready you are. We had multiple dress rehearsals of each and every activity (including smoke testing) planned for Go-Live stages 2 and 4.

 

8.  Communication Plan Considerations

 

As mentioned earlier, not all communication plans are the same. We have to adapt methods and procedures to suit the situation and the audience. For example, a kick-off must be held with the wider client team, all 3rd parties, and the project team before we start any stage, such as a cutover dress rehearsal, Go-Live cutover, or testing. Similarly, a corporate intranet post is needed if we must communicate with the entire organization.

 

We adopted the following communication methods. This is not specific to system consolidation projects and can be adapted to any project based on its nature and audience.

 

[Figure: Comm.png – communication methods by audience]

 

 

9. Governance

 

Governance is key to the successful delivery of an ERP consolidation project. It is even more important in this kind of engagement because you don't want to deal with humongous, unaccounted-for changes while you are shifting the business's core to an entirely new system. I will split governance into the following 2 categories:

 

 

9.1   Data Governance

Data governance is absolutely required, and it must be driven by project phase and change severity levels. For example, we had to stop the placement of all international orders a couple of weeks before Go-Live because of pick/pack/ship lead times. Similarly, the master data freeze should start from the end of UAT, as you will be migrating master data to the target system by that stage.

 

 

9.2   Technical Change Governance

As mentioned in section 1, all technical code or configuration changes need to be manually retrofitted from the retiring Parts production landscape to the project landscape. We must draw a line on break-fixes/changes moving to production from the start of the project test phase, as every new break-fix retrofit warrants repeating the testing of all scenarios impacted by that change. The exponential effort around testing and manual retrofit, and the associated risks, may not be worth it unless the change is a must-have severity 1 issue. We introduced a toll-gate for every change moving to the retiring production system, ensured that only severity 1 changes were permitted, and required the person responsible to explain the change to the project and testing teams so the relevant scenarios could be tested or re-tested in the consolidated environment.

 

 

 

10. Hyper-care / Post Go-Live Considerations

 

Hyper-care, as the name states, means utmost care: we need extra pairs of eyes and monitoring across all possible system and business dimensions. Business issues will be addressed through the stabilization channels anyway, so I would like to focus on the topics that don't seem important until they start creating problems.

 

10.1   Operational Metrics

One of the questions we get at the organization's CxO level is: how do we ensure that the migration did not impact business throughput? The only way to address this concern is to start tracking operational metrics before and after Business Go-Live and publish the average order, delivery, and billing net values against a 4-week window. Some of the operational metrics we gathered and reported on (a minimal counting sketch follows the figure):

 

  1. How many orders were created every day, and the order intake channel of each
  2. How many orders were converted to deliveries and PGI'd
  3. How many units were shipped
  4. How many deliveries were converted to billing
  5. etc.

[Figure: op.png – operational throughput metrics tracked before and after Go-Live]
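As an illustration, a minimal ABAP sketch of two of these daily counts; a real metrics pack also breaks intake down by channel and adds net values.

REPORT z_ops_throughput.
* Illustrative sketch: daily order intake and PGI'd delivery counts
* for the before/after Go-Live throughput comparison.
DATA: lv_date TYPE d,
      lv_cnt  TYPE i.

lv_date = sy-datum - 1.          " yesterday's throughput

SELECT COUNT(*) FROM vbak INTO lv_cnt
  WHERE erdat = lv_date
    AND vbtyp = 'C'.             " sales orders only
WRITE: / 'Orders created:', lv_cnt.

* Deliveries with actual goods movement (PGI) on that date
SELECT COUNT(*) FROM likp INTO lv_cnt
  WHERE wadat_ist = lv_date.
WRITE: / 'Deliveries PGI''d:', lv_cnt.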

 

 

10.2  System Monitoring

System monitoring is essential; most of the time we merely react to issues rather than taking a proactive monitoring and correction approach. Various parameters can be tracked at regular intervals, like CPU utilization, memory consumption, work process availability, and central instance health. We tracked the following every 2 hours:

 

[Figure: sys.png – system health parameters tracked every 2 hours]

 

Note: These practices worked well for my project but may not hold true in your case. Please use your own discretion when applying any of these lessons learned to your project.

