UNLOCKING THE POWER OF L1 REGULARIZATION: A NOVEL APPROACH TO TAMING OVERFITTING IN CNN FOR IMAGE CLASSIFICATION (Open Access)
- Authors
- 이영문
- Issue Date
- Sep-2025
- Publisher
- Public Library of Science (PLOS)
- Keywords
- Article; Classification; Convolutional Neural Network; Deep Learning; Diagnosis; Human; Mango; Plant Leaf
- Citation
- PLOS ONE, v.20, no.9 (September), pp. 1-10
- Pages
- 10
- Indexed
- SCIE; SCOPUS
- Journal Title
- PLOS ONE
- Volume
- 20
- Number
- 9 September
- Start Page
- 1
- End Page
- 10
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/126130
- DOI
- 10.1371/journal.pone.0327985
- ISSN
- 1932-6203
1932-6203
- Abstract
- Convolutional Neural Networks (CNNs) are indispensable tools in deep learning, capable of autonomously extracting crucial features from diverse data types. However, the intricacy of CNN architectures can lead to challenges such as overfitting and underfitting, necessitating thoughtful strategies to optimize performance. In this work, these issues are addressed by introducing L1 regularization into the basic CNN architecture for image classification. The proposed model is applied to three different datasets, and it is observed that L1 regularization with different coefficient values has distinct effects on the CNN's working mechanism, improving its performance. In MNIST digit classification, L1 regularization (coefficient: 0.01) simplifies feature representation and prevents overfitting, leading to enhanced accuracy. On the Mango Tree Leaves dataset, dual L1 regularization (coefficients: 0.001 for convolutional layers and 0.01 for dense layers) improves model interpretability and generalization, enabling effective leaf classification. For hand-drawn sketches such as those in the Quick, Draw! dataset, L1 regularization (coefficient: 0.001) refines feature representation, yielding improved recognition accuracy and generalization across diverse sketch categories. These findings underscore the significance of regularization techniques such as L1 regularization in fine-tuning CNNs, optimizing their performance, and ensuring their adaptability to new data while maintaining high accuracy. Such strategies play a pivotal role in advancing the utility of CNNs across various domains, further solidifying their position as a cornerstone of deep learning.
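- The record does not include the paper's full architecture, but as a rough sketch of the technique the abstract describes, the snippet below builds a small Keras CNN with L1 penalties attached to both convolutional and dense layers, using the coefficient pairing reported for the Mango Tree Leaves experiments (0.001 convolutional, 0.01 dense). The layer counts, filter sizes, input shape, and class count are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch, not the authors' exact model: a small CNN with L1
# regularization on convolutional and dense layers (assumed TensorFlow/Keras).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers


def build_l1_regularized_cnn(input_shape=(128, 128, 3), num_classes=8,
                             conv_l1=1e-3, dense_l1=1e-2):
    """Hypothetical CNN; layer counts and sizes are illustrative only."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # L1 penalty on convolutional kernels encourages sparse filters.
        layers.Conv2D(32, 3, activation="relu",
                      kernel_regularizer=regularizers.l1(conv_l1)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu",
                      kernel_regularizer=regularizers.l1(conv_l1)),
        layers.MaxPooling2D(),
        layers.Flatten(),
        # Stronger L1 penalty on the dense layer, as described in the abstract.
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l1(dense_l1)),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

For the single-coefficient settings mentioned in the abstract, the same builder could be called with matching values, e.g. `conv_l1=dense_l1=0.01` for MNIST or `0.001` for the Quick, Draw! sketches.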
- Appears in Collections
- COLLEGE OF ENGINEERING SCIENCES > DEPARTMENT OF ROBOT ENGINEERING > 1. Journal Articles
