UGC Approved Journal no 63975(19)

ISSN: 2349-5162 | ESTD Year : 2014
Volume 12 | Issue 10 | October 2025


Published in:

Volume 11 Issue 2
February-2024
eISSN: 2349-5162

UGC Approved Journal No 63975 | Impact Factor 7.95 (calculated by Google Scholar)

Unique Identifier

Published Paper ID: JETIR2402116
Registration ID: 532519
Page Number: b148-b153


Title

DIFFERENTIAL PRIVACY TECHNIQUES IN MACHINE LEARNING FOR ENHANCED PRIVACY PRESERVATION

Abstract

The increasing reliance on machine learning models has raised concerns about the privacy of the sensitive information used to train them. Differential privacy has consequently become a viable paradigm for attaining strong privacy guarantees without sacrificing model utility. This study surveys key approaches to differential privacy in machine learning. The first class of approaches adds controlled noise at various points in the machine learning pipeline: Laplace or Gaussian noise is deliberately added to training data, model parameters, or predictions to avoid unintentionally revealing private information. Data perturbation, through strategies such as randomized response, can enhance privacy for each individual without compromising model quality. Privacy-preserving aggregation techniques such as Secure Multi-Party Computation (SMPC) facilitate collaborative model training while protecting raw data. Differentially Private Stochastic Gradient Descent (DP-SGD) provides privacy guarantees during the optimization stage by adding noise to gradients during training. Combining differential privacy with federated learning enables decentralized model training across devices while keeping sensitive data local and secure. Advanced cryptographic approaches such as homomorphic encryption and secure aggregation protocols add a further layer of privacy by allowing computation on encrypted data or safe aggregation of model updates. Taken together, these methods form a comprehensive framework for differentially private machine learning that balances the need to protect individual privacy against the drive to build accurate models.
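To make the noise-based mechanisms named in the abstract concrete, here is a minimal stdlib-only Python sketch of the Laplace mechanism, randomized response, and a single DP-SGD gradient-aggregation step. The function names and parameters are illustrative assumptions, not taken from the paper itself:

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon):
    """Release `value` with Laplace(0, sensitivity/epsilon) noise,
    giving epsilon-differential privacy for a query with the given
    L1 sensitivity. Samples Laplace noise via the inverse CDF."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

def randomized_response(true_bit, epsilon):
    """Local differential privacy for a yes/no answer: report the
    truth with probability e^eps / (e^eps + 1), otherwise flip it."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def dp_sgd_noisy_gradient(per_example_grads, clip_norm, noise_multiplier):
    """One DP-SGD aggregation step: clip each per-example gradient to
    `clip_norm` in L2 norm, sum the clipped gradients, add Gaussian
    noise scaled to the clip norm, and average over the batch."""
    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        factor = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            summed[i] += g[i] * factor
    sigma = noise_multiplier * clip_norm
    return [(summed[i] + random.gauss(0.0, sigma)) / n for i in range(dim)]
```

In this sketch, a larger `epsilon` means less noise and weaker privacy, while the DP-SGD step's privacy accounting (how `noise_multiplier` translates into an overall epsilon over many steps) is deliberately omitted for brevity.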

Keywords

Differential Privacy, Noise, SGD, Federated Learning, Machine Learning

Cite This Article

"DIFFERENTIAL PRIVACY TECHNIQUES IN MACHINE LEARNING FOR ENHANCED PRIVACY PRESERVATION", International Journal of Emerging Technologies and Innovative Research (www.jetir.org), ISSN: 2349-5162, Vol. 11, Issue 2, pp. b148-b153, February 2024. Available at: http://www.jetir.org/papers/JETIR2402116.pdf

ISSN


2349-5162 | Impact Factor 7.95 (calculated by Google Scholar)



Publication Details

Published Paper ID: JETIR2402116
Registration ID: 532519
Published In: Volume 11 | Issue 2 | February 2024
DOI (Digital Object Identifier):
Page No: b148-b153
Country: North Brunswick, NJ, United States of America
Area: Engineering
ISSN Number: 2349-5162
Publisher: IJ Publication

