Monday, June 3, 2019

Data Conversion and Migration Strategy

1. Data Conversion and Migration Strategy

The scope of this section is to define the data migration strategy from a CRM perspective. By its very nature, CRM is not a wholesale replacement of legacy systems with BSC CRM but rather the coordination and management of customer interaction within the existing application landscape. Therefore a large-scale data migration in the traditional sense is not required; only a select few data entities need to be migrated into BSC CRM. Data migration is typically a one-off activity prior to go-live. Any ongoing data feeds required on a frequent or ad-hoc basis are considered to be interfaces, and are not part of the data migration scope.

This section outlines how STEE-InfoSoft intends to manage the data migration from the CAMS and HPSM legacy systems to the BSC CRM system. STEE-InfoSoft will provide a comprehensive data conversion and migration solution to migrate the current legacy databases of CAMS and HPSM. The solution adopts the most suitable and appropriate technology for database migration, using our proven methodology and professional expertise. STEE-InfoSoft's data migration methodology assures customers of the quality, consistency, and accuracy of results. Table 11 shows STEE-InfoSoft's data migration value proposition using our methodology.

Table 11: STEE-InfoSoft data migration value proposition

Value: Cost Effective
Details: STEE-InfoSoft adopts a cost-effective data migration solution. Minimal downtime can be achieved for the data migration. Extensive use of automation speeds up work and makes post-run changes and corrections practical. Error tracking and correction capabilities help to avoid repeated conversion re-runs.
Customization enables getting the job done the correct way.

Value: Very Short Downtime
Details: Downtime is minimized because most of the migration processes are external to the running application system and do not affect its normal workflow. Downtime is further reduced by allowing the data conversion to be performed in stages.

Value: Assured Data Integrity
Details: Scripts and programs are automatically generated for later use when testing and validating the data.

Value: Control Over the Migration Process
Details: Unique ETL (Extract, Transform and Load) scripts are created to run the extract and load processes in order to reduce the downtime of the existing systems. This includes merging fields, filtering, splitting data, changing field definitions and translating field content, as well as addition, deletion, transformation, aggregation, and validation rules for cleansing data.

1.1 Data Migration Overview

Data migration is the transfer of data from one location, storage medium, or hardware/software system to another. Migration efforts are often prompted by the need for upgrades in technical infrastructure or changes in business requirements. Best practice in data migration recommends two principles which are inherent to successful data migration:

- Perform data migration as a project dedicated to the unique objective of establishing a new (target) data store.
- Perform data migration in four primary phases: Data Migration Planning, Data Migration Analysis and Design, Data Migration Implementation, and Data Migration Closeout, as shown in 1.1.

In addition, successful data migration projects are ones that maximize opportunities and mitigate risks. The following critical success factors have been identified:

- Perform data migration as an independent project.
- Establish and manage expectations throughout the process.
- Understand current and future data and business requirements.
- Identify individuals with expertise regarding legacy data.
- Collect available documentation regarding legacy system(s).
- Define data migration project roles and responsibilities clearly.
- Perform a comprehensive overview of data content, quality, and structure.
- Coordinate with business owners and stakeholders to determine the importance of business data and data quality.

1.2 STEE-Info Data Migration Project Lifecycle

Table 12 lists the high-level processes for each phase of the STEE-Info Data Migration Project Lifecycle. While all data migration projects follow the four phases in the Data Migration Project Lifecycle, the high-level and low-level processes may vary depending on the size, scope and complexity of each migration project. Therefore, the following information should serve as a guideline for developing, evaluating, and implementing data migration efforts. Each high-level and low-level process should be included in a Data Migration Plan. For those processes not deemed appropriate, a justification for exclusion should be documented in the Data Migration Plan.

Table 12: Data Migration Project Lifecycle with high-level tasks identified

Data Migration Planning Phase:
- Plan Data Migration Project
- Determine Data Migration Requirements
- Assess Current Environment
- Develop Data Migration Plan
- Define and Assign Team Roles and Responsibilities

Data Migration Analysis and Design Phase:
- Analyze Assessment Results
- Define Security Controls
- Design Data Environment
- Design Migration Procedures
- Validate Data Quality

Data Migration Implementation Phase:
- Develop Procedures
- Stage Data
- Cleanse Data
- Convert and Transform Data (as needed)
- Migrate Data (trial/deployment)
- Validate Migration Results (iterative)
- Validate Post-migration Results

Data Migration Closeout Phase:
- Document Data Migration Results
- Document Lessons Learned
- Perform Knowledge Transfer
- Communicate Data Migration Results

During the lifecycle of a data migration project, the team moves the data through the activities shown in 1.2. The team will repeat these data management activities as needed to achieve a successful data load to the new target data store.
1.3 Data Migration Guiding Principles

1.3.1 Data Migration Approach

1.3.1.1 Master Data (e.g. Customers, Assets)

The approach is that master data will be migrated into CRM provided these conditions hold:

- The application where the data resides is being replaced by CRM.
- The master records are required to support CRM functionality post-go-live.
- There is a key operational, reporting or legal/statutory requirement.
- The master data is current (e.g. records marked for deletion need not be migrated) OR is required to support another migration.
- The legacy data is of sufficient quality so as not to adversely affect the daily running of the CRM system OR will be cleansed/enhanced sufficiently by the business within the data migration process to meet this requirement.

Note: Where the master data resides in an application that is not being replaced by CRM, but is required by CRM to support specific functionality, the data will not be migrated but accessed from CRM using a dynamic query look-up. A dynamic query look-up is a real-time query accessing the data in the source application as and when it is required. The advantages of this approach are:

- It avoids the duplication of data throughout the system landscape.
- It avoids data within CRM becoming out of date.
- It avoids the development and running of frequent interfaces to update the data within CRM.
- It reduces the quantity of data within the CRM systems.

1.3.1.2 Open Transactional Data (e.g. Service Tickets)

The approach is that open transactional data will NOT be migrated to CRM unless ALL of these conditions are met:

- There is a key operational, reporting or legal/statutory requirement.
- The legacy system is to be decommissioned as a result of the BSC CRM project in timescales that would preclude a run-down of open items.
- The parallel run-down of open items within the legacy system is impractical due to operational, timing or resource constraints.
- The CRM build and structures permit a correct and ordered interpretation of legacy system items alongside CRM-generated items.
- The business owner is able to commit resources to own data reconciliation and sign-off at a detailed level in a timely manner across multiple project phases.

1.3.1.3 Historical Master and Transactional Data

The approach is that historical data will not be migrated unless ALL of these conditions are met:

- There is a key operational, reporting or legal/statutory requirement that cannot be met by using the remaining system.
- The legacy system is to be decommissioned as a direct result of the BSC CRM project within the BSC CRM project timeline.
- An archiving solution could not meet requirements.
- The CRM build and structures permit a correct and consistent interpretation of legacy system items alongside CRM-generated items.
- The business owner is able to commit resources to own data reconciliation and sign-off at a detailed level in a timely manner across multiple project phases.

1.3.2 Data Migration Testing Cycles

In order to test and verify the migration process, it is proposed that there will be three testing cycles before the final live load:

- Trial Load 1: Unit testing of the extract and load routines.
- Trial Load 2: The first test of the complete end-to-end data migration process for each data entity. The main purpose of this load is to verify that the extract routines work correctly, the staging-area transformation is correct, and the load routines can load the data successfully into CRM.
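A minimal sketch of the kind of end-to-end check a trial load performs is comparing in-scope source-extract counts against what actually lands in the target. The entity, table and column names below, and the in-memory SQLite stand-ins for the legacy and CRM databases, are all illustrative assumptions, not the actual CAMS/HPSM or Siebel schemas:

```python
import sqlite3

# Illustrative stand-ins for the legacy source and the CRM target;
# a real trial load would run against the actual staging area and CRM.
legacy = sqlite3.connect(":memory:")
crm = sqlite3.connect(":memory:")

legacy.execute("CREATE TABLE customers (id INTEGER, name TEXT, status TEXT)")
legacy.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                   [(1, "Alice", "ACTIVE"), (2, "Bob", "ACTIVE"), (3, "Carol", "DELETED")])

crm.execute("CREATE TABLE s_contact (row_id INTEGER, full_name TEXT)")

# Extract: only current records are in scope
# (records marked for deletion are not migrated).
rows = legacy.execute("SELECT id, name FROM customers WHERE status != 'DELETED'").fetchall()
expected = len(rows)

# Load into the target.
crm.executemany("INSERT INTO s_contact VALUES (?, ?)", rows)

# Reconcile: the loaded count must equal the in-scope extract count.
loaded = crm.execute("SELECT COUNT(*) FROM s_contact").fetchone()[0]
assert loaded == expected, f"load mismatch: extracted {expected}, loaded {loaded}"
print(f"reconciled: {loaded} of {expected} in-scope records loaded")
```

In a real cycle the same comparison would be repeated per data entity, with failures fed back into the cleansing actions described later.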
The various data entities will not necessarily be loaded in the same sequence as will be done during the live cutover.

- Trial Cutover: A complete rehearsal of the live data migration process. The execution will be done using the cutover plan in order to validate that the plan is reasonable and possible to complete in the agreed timescale. A final set of cleansing actions will come out of the trial cutover (for any records which failed during the migration because of data quality issues). There will be at least one trial cutover. For complex, high-risk migrations, several trial runs may be performed until the result is entirely satisfactory and 100% correct.
- Live Cutover: The execution of all tasks required to prepare BSC CRM for the go-live of a particular release. A large majority of these tasks will be related to data migration.

1.3.3 Data Cleansing

Before data can be successfully migrated it needs to be clean; data cleansing is therefore an important element of any data migration activity:

- Data needs to be consistent, standardised and correctly formatted to allow successful migration into CRM (e.g. CRM holds addresses as structured addresses, whereas some legacy systems might hold this data in a freeform format).
- Data needs to be complete, to ensure that upon migration all fields which are mandatory in CRM are populated. Any fields flagged as mandatory which are left blank will cause the migration to fail.
- Data needs to be de-duplicated and of sufficient quality to allow efficient and correct support of the defined business processes. Duplicate records can either be marked for deletion at source (the preferred option) or excluded in the extract/conversion process.
- Legacy data fields could have been misused (holding information different from what the field was initially intended for). Data cleansing should pick this up, and a decision needs to be made whether this data should be excluded (i.e. not migrated) or transferred into a more appropriate field.

It is the responsibility of the data owner (i.e. MOM) to ensure the data provided to STEE-Info for migration into BSC CRM (whether this is from a legacy source or a template populated specifically for BSC CRM) is accurate.

Data cleansing should, wherever possible, be done at source, i.e. in the legacy systems, for the following reasons:

- Unless a data change freeze is put in place, extracted datasets become out of date as soon as they have been extracted, due to updates taking place in the source system. When re-extracting the data at a later date to get the most recent updates, data cleansing actions performed on the extract will be overwritten. Cleansing would therefore have to be repeated each time a new dataset is extracted; in most cases this is impractical and requires a large effort.
- Data cleansing is typically a business activity. Cleansing in the actual legacy system has the advantage that business people already have access to the legacy system and are familiar with the application, which is not the case when data is stored in staging areas.
- In certain cases it may be possible to develop a programme to do a certain level of automated cleansing, although this adds additional risk of data errors.
- If data cleansing is done at source, each time a new (i.e. more recent) extract is taken, the results of the latest cleansing actions will automatically be reflected in the extract without additional effort.

1.3.4 Pre-Migration Testing

Testing breaks down into two core subject areas: logical errors and physical errors. Physical errors are typically syntactical in nature and can be easily identified and resolved; they have nothing to do with the quality of the mapping effort. Rather, this level of testing deals with the semantics of the scripting language used in the transformation effort. Testing is where we identify and resolve logical errors. The first step is to execute the mapping.
Even if the mapping executes successfully, we must still ask questions such as: How many records did we expect this script to create? Did the correct number of records get created? Has the data been loaded into the correct fields? Has the data been formatted correctly?

The fact is that data mapping often does not make sense to most people until they can physically interact with the new, populated data structures. Frequently, this is where the majority of transformation and mapping requirements will be discovered. Most people simply do not realize they have missed something until it is no longer there. For this reason, it is critical to let users loose on the populated target data structures as soon as possible. The data migration testing phase must be reached as soon as possible to ensure that it occurs prior to the design and building phases of the core project. Otherwise, months of development effort can be lost as each additional migration requirement slowly but surely wreaks havoc on the data model. This, in turn, requires substantive modifications to the applications built upon the data model.

1.3.5 Migration Validation

Before the migration can be considered a success, one critical step remains: to validate the post-migration environment and confirm that all expectations have been met prior to committing. At a minimum, network access, file permissions, directory structure, and databases/applications need to be validated, which is often done via non-production testing. Another good strategy to validate a software migration is to benchmark the way the business functions pre-migration and then compare that benchmark to the behaviour after migration. The most effective way to collect benchmark measurements is collecting and analyzing quality metrics for the various business areas and their corresponding functions.

1.3.6 Data Conversion Process

The mapped information and data conversion programs will be put into use during this period.
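The post-load questions above (did the expected number of records get created, did data land in the right fields, is it formatted correctly) translate naturally into scripted assertions. The sketch below is illustrative only; the record set, field names and phone-format rule are assumed, not taken from the actual BSC CRM mapping:

```python
import re

# Illustrative result of a mapping run: records as loaded into the target structure.
loaded_records = [
    {"contact_id": "C001", "phone": "+65-6555-0101"},
    {"contact_id": "C002", "phone": "+65-6555-0102"},
]
expected_count = 2  # how many records the mapping script was expected to create

# Did the correct number of records get created?
assert len(loaded_records) == expected_count

# Has the data been loaded into the correct fields?
required_fields = {"contact_id", "phone"}
for rec in loaded_records:
    assert required_fields <= rec.keys(), f"missing fields in {rec}"

# Has the data been formatted correctly? (assumed phone format, for illustration)
phone_pattern = re.compile(r"^\+\d{2}-\d{4}-\d{4}$")
for rec in loaded_records:
    assert phone_pattern.match(rec["phone"]), f"bad phone format: {rec['phone']}"

print("post-load checks passed")
```

Checks of this kind are cheap to re-run after every trial load, which is what makes repeated migration test cycles practical.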
The duration and timeframe of this process will depend on:

- The amount of data to be migrated
- The number of legacy systems to be migrated
- Resource limitations such as server performance
- The errors churned out by this process

The conversion error management approach aims to reject all records containing a serious error as soon as possible during the conversion process. Correction facilities are provided during the conversion where possible; these will use the existing amendment interface. Errors can be classified as follows:

- Fatal errors: errors so serious that they prevent the record from being loaded onto the database. These include errors that cause a breach of database integrity, such as duplicate primary keys or invalid foreign key references. These errors will be the focus of data cleansing both before and during the conversion. Attempts to correct such errors without user interaction are usually futile.
- Non-fatal errors: less serious errors. The affected record is loaded onto the database still containing the error, and the error is communicated to the user via a work management item attached to the record. The error is then corrected with information from the user.
- Auto-corrected errors: errors for which the offending data item is replaced by a previously agreed value by the conversion modules. This agreement is reached before the conversion process starts, together with the user, to determine the values which need to be updated.

One of the important tasks in the process of data conversion is data validation.
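A sketch of how the three error classes above might drive record handling during conversion; the classification rules, field names and agreed default value are illustrative assumptions, not the actual conversion rules:

```python
# Classify each incoming record as fatal / auto-corrected / non-fatal / clean.
# All rules and defaults here are invented for illustration.
AGREED_DEFAULT_REGION = "UNKNOWN"  # assumed previously agreed replacement value

def triage(record, existing_keys):
    """Return (disposition, record_to_load) for one legacy record."""
    # Fatal: breach of database integrity -> reject, do not load.
    if record["id"] in existing_keys:
        return "fatal:duplicate-primary-key", None

    # Auto-corrected: offending item replaced by a previously agreed value.
    if not record.get("region"):
        record = {**record, "region": AGREED_DEFAULT_REGION}
        return "auto-corrected", record

    # Non-fatal: load anyway and raise a work item for the user to fix later.
    if not record.get("email"):
        return "non-fatal:work-item-raised", record

    return "clean", record

existing = {"A1"}
batch = [
    {"id": "A1", "region": "SG", "email": "x@example.com"},  # duplicate key -> fatal
    {"id": "A2", "region": "",   "email": "y@example.com"},  # blank region -> auto-corrected
    {"id": "A3", "region": "SG", "email": ""},               # blank email -> non-fatal
]
results = [triage(r, existing)[0] for r in batch]
print(results)  # -> ['fatal:duplicate-primary-key', 'auto-corrected', 'non-fatal:work-item-raised']
```

The key design point is that only fatal errors block the load; everything else either self-heals or is routed to a user for follow-up.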
Data validation in a broad sense includes checking the translation process per se, and checking the information to see to what degree the conversion process is an information-preserving mapping. Some of the common verification methods to be used are:

- Financial verifications (verifying pre- to post-conversion totals for key financial values, and verifying subsidiary to general ledger totals), to be conducted centrally in the presence of accounts, audit, and compliance/risk management.
- Mandatory exception verifications and rectifications (of those exceptions that must be resolved to avoid production problems), to be reviewed centrally, with branches executing and confirming rectifications, again in the presence of network management, audit, and compliance/risk management.
- Detailed verifications (where full details are printed and users do random detailed verifications against legacy system data), to be conducted at branches with final confirmation sign-off by branch deployment and the branch manager, together with electronic file matching (matching field by field or record by record) using pre-defined files.

1.4 Data Migration Method

The primary method of transferring data from a legacy system into Siebel CRM is through Siebel Enterprise Integration Manager (EIM). This facility enables bidirectional exchange of data between non-Siebel databases and the Siebel database. It is a server component in the Siebel eAI component group that transfers data between the Siebel database and other corporate data sources. This exchange of information is accomplished through intermediary tables called EIM tables. The EIM tables act as a staging area between the Siebel application database and other data sources. The following figure illustrates how data from the HPSM, CAMS, and IA databases will be migrated to the Siebel CRM database.

1.5 Data Conversion and Migration Schedule

The following is the proposed data conversion and migration schedule to migrate the HPSM, CAMS, and IA databases into the Siebel CRM database.
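The intermediary staging-table pattern that EIM implements can be illustrated in miniature: data is first bulk-loaded unmodified into a staging table, then transformed as it is moved into the target schema. The table and column names below are invented for illustration and are not the real EIM interface tables; EIM itself is configured and run through Siebel server components, not hand-written SQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Staging table: a landing area mirroring the legacy extract.
db.execute("CREATE TABLE stg_contact (legacy_id TEXT, first TEXT, last TEXT)")
# Target table: the application's own schema.
db.execute("CREATE TABLE app_contact (row_id INTEGER PRIMARY KEY, full_name TEXT, src TEXT)")

# Step 1: bulk-load the legacy extract into staging, untransformed.
extract = [("H-10", "Ada", "Ng"), ("H-11", "Ben", "Tan")]
db.executemany("INSERT INTO stg_contact VALUES (?, ?, ?)", extract)

# Step 2: transform while moving from staging into the target
# (here: concatenate the name fields and tag the source system).
db.execute("""
    INSERT INTO app_contact (full_name, src)
    SELECT first || ' ' || last, 'HPSM:' || legacy_id FROM stg_contact
""")

count = db.execute("SELECT COUNT(*) FROM app_contact").fetchone()[0]
print(count)  # -> 2
```

Keeping the staging table separate from the target means failed loads can be inspected and re-run without touching the live application schema, which is the same reason EIM tables sit between the source systems and the Siebel base tables.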
1.6 Risks and Assumptions

1.6.1 Risks

- MOM may not be able to confidently reconcile large and/or complex data sets. Since the data migration will need to be reconciled a minimum of three times (system test, trial cutover and live cutover), the effort required within the business to comprehensively test the migrated data set is significant. In addition, technical data loading constraints during cutover may mean a limited time window is available for reconciliation tasks (e.g. overnight or during weekends).
- MOM may not be able to comprehensively cleanse the legacy data in line with the BSC CRM project timescales. Since the migration to BSC CRM may depend on a number of cleansing activities carried out in the legacy systems, the effort required within the business will increase proportionately with the volume of data migrated. Failure to complete this exercise in the required timescale may result in data being unable to be migrated into BSC CRM in time for the planned cutover.
- The volume of data errors in the live system may increase if reconciliation is not completed to the required standard. The larger and more complex a migration becomes, the more likely it is that anomalies will occur, and some of these may initially go undetected. In the best case such data issues lead to a business and project overhead in rectifying the errors after the event; in the worst case the business ends up operating on inaccurate data.
- The more data is migrated into BSC CRM, the more complex and lengthy the cutover becomes, resulting in an increased risk of not being able to complete the migration task on time. Any further resource or technical constraints add to this risk. Due to the volume of the task, data migration can divert project and business resources away from key activities such as initial system build, functional testing and user acceptance testing.

1.6.2 Assumptions

Data access:
- Access to the data held within the CAMS, HPSM and IA applications is required to enable data profiling, the identification of data sources, and the writing of functional and technical specifications.
- Access connectivity to the HPSM, CAMS and IA databases is required to enable execution of data migration scripts.
- MOM is to provide workstations to run ETL scripts for the data migration of the HPSM, CAMS and IA databases.
- There must not be any schema changes on the legacy HPSM, CAMS and IA databases during the data migration phase.
- MOM is to provide a sample of production data for testing the developed ETL scripts.

MOM business resource availability:
- Required to assist in data profiling, the identification of data sources, and the creation of functional and technical specifications.
- Required to develop and run data extracts from the CAMS and HPSM systems.
- Required to validate, reconcile and sign off data loads.
- Required for data cleansing. Data cleansing of source data is the responsibility of MOM. STEE-Info will help identify data anomalies during the data migration process; however, STEE-Info will not cleanse the data in the CAMS and HPSM applications. Depending on the data quality, data cleansing can require considerable effort and involve a large number of resources.

The scope of the data migration requirements has not yet been finalised; as data objects are identified they will be added to the data object register.
