
Arriving at the Results by Comparing with Traditional Approach to My Approach in Deriving Function Points for ETL Operations
A. Rakesh Phanindra1, V. B. Narasimha2

1A. Rakesh Phanindra, Information Technology, Institute of Public Enterprise, Survey No. 1266, Shamirpet (V&M), Medchal, Malkajgiri district, Hyderabad (Telangana), India.
2Dr. V. B. Narasimha, Assistant Professor, Department of Computer Science and Engineering, University College of Engineering, Osmania University, Hyderabad (Telangana), India.
Manuscript received on 14 November 2022 | Revised Manuscript received on 25 December 2022 | Manuscript Accepted on 15 January 2023 | Manuscript published on 30 January 2023 | PP: 53-57 | Volume-11 Issue-5, January 2023 | Retrieval Number: 100.1/ijrte.E73570111523 | DOI: 10.35940/ijrte.E7357.0111523
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: It is hard to estimate how much data must be loaded into the data warehouse when the transaction system’s entire history is migrated there, especially when the transfer can take weeks or even months. When estimating a large initial load, the ETL system must be broken down into its three independent stages: extracting data from the source systems, transforming the data into the dimensional model, and loading the data warehouse, with timing estimates prepared for each stage, beginning with extraction. Surprisingly, data extraction from the source system often accounts for the majority of the ETL procedure. Online transaction processing (OLTP) systems are not designed to return the massive result sets that a data warehouse’s historical load demands, since that load extracts a tremendous quantity of data in a single query. The daily incremental loads and the one-time historical database loads are therefore very different. In either case, populating the fact tables requires data to be pulled in ways that transaction systems are not built for, so ETL extraction procedures often rely on time-consuming techniques such as views, cursors, stored procedures, and correlated subqueries. It is essential to anticipate how long an extract will take before it begins, yet calculating that estimate is challenging. Because of the hardware mismatch between the test and production servers, estimates based on executing the ETL operations in the test environment may be significantly distorted. On some projects, an extract task would run continuously until it eventually failed, at which point it would be restarted and run again until it failed once more; days or even weeks passed without producing anything. To overcome the challenges of working with large data volumes, the extraction process must be divided into two simpler measurements, the first being query response time: the interval between when the query is issued and when the data starts to be returned.
Estimating the effort required for ETL operations in Data Mart and DWH projects, in terms of function points, is therefore essential. In the previous paper, I discussed the General System Characteristics used to determine the Value Adjustment Factor. In this paper, I present the results: I compared my findings with conventional FPA on industrial projects to evaluate Function Point Analysis’s suitability for Data Mart projects, and I outline the strategy, implementation, and analysis of outcomes of this validation.
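For readers unfamiliar with the Value Adjustment Factor mentioned above, the standard IFPUG computation can be sketched as below. The formula (VAF = 0.65 + 0.01 × the sum of the 14 General System Characteristic ratings) is the conventional one; the example ratings and unadjusted count are invented for illustration and are not taken from the paper's industrial projects.

```python
def value_adjustment_factor(gsc_ratings):
    """VAF = 0.65 + 0.01 * sum of the 14 degrees of influence (each 0-5)."""
    assert len(gsc_ratings) == 14, "IFPUG defines exactly 14 GSCs"
    assert all(0 <= r <= 5 for r in gsc_ratings), "each rating is 0..5"
    return 0.65 + 0.01 * sum(gsc_ratings)


def adjusted_function_points(unadjusted_fp, gsc_ratings):
    """Adjusted FP = Unadjusted FP * VAF."""
    return unadjusted_fp * value_adjustment_factor(gsc_ratings)


# Illustrative project: every GSC rated at a moderate 3, and a
# hypothetical unadjusted count of 200 function points.
ratings = [3] * 14
print(round(value_adjustment_factor(ratings), 2))     # 1.07
print(round(adjusted_function_points(200, ratings)))  # 214
```

The VAF thus scales an unadjusted count by a factor between 0.65 (all characteristics rated 0) and 1.35 (all rated 5), which is how the General System Characteristics from the previous paper feed into the final effort figure.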

Keywords: Function Point Analysis, ETL, Data Marts.
Scope of the Article: Expert Approaches