
Improving the Generalization of Deep Neural Networks Through Regularization Techniques

EasyChair Preprint 15810

12 pages · Date: February 11, 2025

Abstract

Deep neural networks (DNNs) have demonstrated impressive performance across various domains, from computer vision to natural language processing. However, they are prone to overfitting, especially when the size of the training data is limited. Regularization techniques play a crucial role in improving the generalization ability of DNNs. In this paper, we explore various regularization methods, including L2 regularization, dropout, and batch normalization, to mitigate overfitting and improve model performance. We provide a mathematical analysis of each technique and evaluate their effectiveness on benchmark datasets such as CIFAR-10 and MNIST. Our results show that combining multiple regularization techniques significantly enhances the model's ability to generalize, achieving better performance on unseen data while maintaining computational efficiency.
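
A minimal sketch, not taken from the preprint itself, of one common way to combine the three techniques named in the abstract: L2 regularization (applied as weight decay in the optimizer), dropout, and batch normalization, here in a small PyTorch classifier. The architecture, layer sizes, and hyperparameters are illustrative assumptions.

# Illustrative sketch: L2 regularization (weight decay), dropout, and
# batch normalization combined in one small classifier. All sizes and
# hyperparameters below are assumptions, not values from the paper.
import torch
import torch.nn as nn

class RegularizedMLP(nn.Module):
    def __init__(self, in_dim=784, hidden=256, num_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden),   # batch normalization
            nn.ReLU(),
            nn.Dropout(p_drop),       # dropout
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = RegularizedMLP()
# L2 regularization enters through the optimizer's weight_decay term,
# which penalizes large weights during each update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=5e-4)

In PyTorch, weight decay in SGD is equivalent to adding an L2 penalty on the parameters to the loss, so no explicit penalty term is needed in the training loop for this sketch.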

Keyphrases: algorithms, deep learning, DNN, NLP

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:15810,
  author    = {James Kung and H Chung and Rene Gozalens and Che Hoo and Isabel Cheng},
  title     = {Improving the Generalization of Deep Neural Networks Through Regularization Techniques},
  howpublished = {EasyChair Preprint 15810},
  year      = {EasyChair, 2025}}