UGC Approved Journal no 63975 | ISSN: 2349-5162 | ESTD Year: 2014

Published in: Volume 9 | Issue 4 | April-2022 | eISSN: 2349-5162

Impact Factor: 7.95 (calculated by Google Scholar)

Unique Identifier

Published Paper ID:
JETIR2204249


Registration ID:
400427

Page Number

c351-c360


Title

EFFICIENT PARALLEL PROCESSING ON DECISION TREES USING GPU

Abstract

Abstract: Decision trees trained on GPUs with batch processing overcome the problem of insufficient memory for holding the entire dataset by avoiding repeated computation of the split value within a node. This is achieved through a node-parallel processing method together with shared memory and synchronization among GPU cores. However, this model suffers from a major load imbalance, which leads to high communication cost. To minimize the workload imbalance, a new model is proposed. The proposed model uses a histogram-construction approach to compress the input data; this compression allows faster data transfer and lower communication time between GPU cores. A hybrid parallel architecture combining data parallelism and feature parallelism is proposed: data parallelism is used in the compress function to compress the training data for computation, while feature parallelism is used for split-value and entropy-gain calculation. This hybrid approach balances the workload across worker nodes significantly, leading to lower communication cost. Results on the ijcnn1 dataset indicate that the proposed model achieves lower training time than other parallel models. Evaluation against different datasets shows a training speedup of about 10x over the sequential model and about 6x over other models such as the OpenMP and OpenMPI implementations. The output of this phase can be used to determine the parameters that are significant in training the proposed model.
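The histogram-construction step described in the abstract can be illustrated with a small sequential sketch (not the authors' code; function names, bin count, and the two-class setup are assumptions for illustration): a feature column is compressed into per-bin class counts, and candidate splits are then scanned at bin boundaries using entropy gain, so the split search touches only the compressed histogram rather than the full data.

```python
# Illustrative sketch of histogram-based split finding for a decision tree.
# Assumes binary labels (0/1); bin count and names are hypothetical.
import math

def build_histogram(values, labels, n_bins=8):
    """Compress one feature column into per-bin class counts."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    hist = [[0, 0] for _ in range(n_bins)]  # hist[b] = [class-0 count, class-1 count]
    for v, y in zip(values, labels):
        b = min(int((v - lo) / width), n_bins - 1)
        hist[b][y] += 1
    return hist

def entropy(counts):
    total = sum(counts)
    if total == 0:
        return 0.0
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def best_split(hist):
    """Scan bin boundaries; return (boundary index, entropy gain)."""
    total = [sum(h[0] for h in hist), sum(h[1] for h in hist)]
    n = sum(total)
    parent = entropy(total)
    best_b, best_gain = None, 0.0
    left = [0, 0]
    for b in range(len(hist) - 1):
        left[0] += hist[b][0]
        left[1] += hist[b][1]
        right = [total[0] - left[0], total[1] - left[1]]
        nl, nr = sum(left), sum(right)
        gain = parent - (nl / n) * entropy(left) - (nr / n) * entropy(right)
        if gain > best_gain:
            best_b, best_gain = b, gain
    return best_b, best_gain

# Tiny example: two well-separated classes.
feature = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]
labels = [0, 0, 0, 1, 1, 1]
hist = build_histogram(feature, labels, n_bins=4)
print(best_split(hist))  # boundary after the first bin separates the classes
```

In the paper's hybrid scheme, data parallelism would distribute the binning loop across GPU cores, while feature parallelism would assign the per-feature `best_split` scans to different workers; since only the small histograms need to be exchanged, communication cost drops.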

Key Words

Decision trees, Batch processing, Histogram construction, Data parallelism, OpenMP, MPI

Cite This Article

"EFFICIENT PARALLEL PROCESSING ON DECISION TREES USING GPU", International Journal of Emerging Technologies and Innovative Research (www.jetir.org), ISSN:2349-5162, Vol.9, Issue 4, page no.c351-c360, April-2022, Available :http://www.jetir.org/papers/JETIR2204249.pdf

ISSN


2349-5162 | Impact Factor 7.95 (calculated by Google Scholar)



Publication Details

Published Paper ID: JETIR2204249
Registration ID: 400427
Published In: Volume 9 | Issue 4 | Year April-2022
DOI (Digital Object Identifier): http://doi.one/10.1729/Journal.29857
Page No: c351-c360
Country: India (Chennai, Tamil Nadu)
Area: Engineering
ISSN Number: 2349-5162
Publisher: IJ Publication


