Arriving at the Results by Comparing the Traditional Approach with My Approach to Deriving Function Points for ETL Operations
A. Rakesh Phanindra1, V. B. Narasimha2
1A. Rakesh Phanindra, Information Technology, Institute of Public Enterprise, Survey No. 1266, Shamirpet (V&M), Medchal, Malkajgiri district, Hyderabad (Telangana), India.
2Dr. V. B. Narasimha, Assistant Professor, Department of Computer Science and Engineering, University College of Engineering, Osmania University, Hyderabad (Telangana), India.
Manuscript received on 14 November 2022 | Revised Manuscript received on 25 December 2022 | Manuscript Accepted on 15 January 2023 | Manuscript published on 30 January 2023 | PP: 53-57 | Volume-11 Issue-5, January 2023 | Retrieval Number: 100.1/ijrte.E73570111523 | DOI: 10.35940/ijrte.E7357.0111523
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: It can be hard to estimate how much data must be loaded into the data warehouse when the entire history of the transaction system is migrated there, especially when the transfer process may take weeks or even months. When estimating a large initial load, however, the ETL system must be broken down into its three independent stages: (1) extracting data from the source systems, (2) transforming the data into the dimensional model, and (3) loading the data warehouse, together with timing estimates for the extraction process. Surprisingly, data extraction from the source system may take up the majority of the ETL procedure. Online transaction processing (OLTP) systems are simply not built to return the massive result sets demanded by the data warehouse’s historic load, which extracts a tremendous quantity of data in a single query. The daily incremental loads and the breath-of-life historic database load are, however, very different. In either case, fact-table population requires data to be pulled in ways that transaction systems are not designed to support, so ETL extraction procedures frequently rely on time-consuming techniques such as views, cursors, stored procedures, and correlated subqueries. It is essential to anticipate how long an extract will take before it begins, yet calculating this estimate is challenging. Because of the hardware mismatch between test and production servers, estimates based on running the ETL operations in the test environment may be greatly distorted. On some projects, an extract task would run continuously until it eventually failed, at which point it would be restarted and run again until it failed once more; days or even weeks passed without producing anything. To overcome the challenges of working with large amounts of data, one must divide the extract process into two simpler steps. The first measure is query response time: the interval between when the query is executed and when the data starts to be returned. It is therefore pertinent to derive effort estimates for ETL operations in Data Mart and data warehouse (DWH) projects in terms of Function Points, which is a scientific approach. In the previous paper, I discussed the General System Characteristics used to arrive at the Value Adjustment Factor. In this paper, I present the results: I compared my findings with conventional FPA on industrial projects in order to evaluate the suitability of Function Point Analysis for Data Mart projects, and I outline the strategy, implementation, and analysis of the outcomes of this validation.
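The query-response-time measure described above can be sketched in code. The following is a minimal illustration, not part of the paper's method: it uses Python's standard sqlite3 module as a stand-in source system, and the `measure_extract` helper and the `orders` table are hypothetical names chosen for this example. It separates the time until the first row arrives (query response time) from the time to drain the full result set.

```python
import sqlite3
import time

def measure_extract(conn, sql):
    """Return (response_time, total_time, rows): time to first row vs. full fetch."""
    start = time.perf_counter()
    cur = conn.execute(sql)
    first = cur.fetchone()              # query response time: first row arrives
    response_time = time.perf_counter() - start
    rows = 0 if first is None else 1
    rows += sum(1 for _ in cur)         # drain the remainder of the result set
    total_time = time.perf_counter() - start
    return response_time, total_time, rows

# Toy stand-in for an OLTP source table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(float(i),) for i in range(10_000)])

resp, total, n = measure_extract(conn, "SELECT * FROM orders")
print(f"first row after {resp:.4f}s, all {n} rows after {total:.4f}s")
```

On a real extract, running this against a representative query on the production source gives a response-time figure that is independent of how long the full transfer takes, which is what makes the two-step split useful for estimation.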
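For readers unfamiliar with the Value Adjustment Factor mentioned above, the standard IFPUG computation is a short formula: the 14 General System Characteristics are each rated on a degree of influence from 0 to 5, their sum is the Total Degree of Influence (TDI), VAF = 0.65 + 0.01 × TDI, and adjusted function points = unadjusted function points × VAF. The sketch below implements only this standard formula; the unadjusted count of 120 FP and the all-3 ratings are illustrative numbers, not results from the paper.

```python
def value_adjustment_factor(gsc_ratings):
    """IFPUG VAF from 14 General System Characteristics, each rated 0-5."""
    if len(gsc_ratings) != 14 or any(not 0 <= r <= 5 for r in gsc_ratings):
        raise ValueError("expected 14 ratings, each between 0 and 5")
    tdi = sum(gsc_ratings)               # Total Degree of Influence, 0..70
    return 0.65 + 0.01 * tdi             # VAF ranges from 0.65 to 1.35

def adjusted_function_points(ufp, gsc_ratings):
    """Adjusted FP = Unadjusted FP x Value Adjustment Factor."""
    return ufp * value_adjustment_factor(gsc_ratings)

# Hypothetical ETL job sized at 120 unadjusted FP with middling GSC ratings.
ratings = [3] * 14                       # TDI = 42, so VAF = 1.07
print(adjusted_function_points(120, ratings))   # approximately 128.4
```

The paper's contribution lies in how the unadjusted counts and GSC ratings are derived for ETL operations; the formula itself is unchanged from conventional FPA, which is what makes the head-to-head comparison on industrial projects possible.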
Keywords: Function Point Analysis, ETL, Data Marts.
Scope of the Article: Expert Approaches