Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Compressive strength prediction of ternary-blended concrete using deep neural network with tuned hyperparameters

Authors
Choi, Ju-Hee; Kim, Dongyoun; Ko, Min-Sam; Lee, Dong-Eun; Wi, Kwangwoo; Lee, Han-Seung
Issue Date
Sep-2023
Publisher
Elsevier BV
Keywords
Compressive strength; Concrete; Deep neural network; Hyperparameter tuning; Mix proportion
Citation
Journal of Building Engineering, v.75, pp 1 - 15
Pages
15
Indexed
SCIE
SCOPUS
Journal Title
Journal of Building Engineering
Volume
75
Start Page
1
End Page
15
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115126
DOI
10.1016/j.jobe.2023.107004
ISSN
2352-7102
Abstract
Studies have been conducted to predict the compressive strength of ternary-blended concrete using regression models such as support vector regression (SVR), random forest (RF), and artificial neural networks. In particular, deep neural networks (DNNs) are among the most effective nonlinear regression models for predicting compressive strength from the intricate relationships among constituent materials. However, because DNN performance depends on the training data, previous studies have covered different varieties of ternary-blended concrete, suggesting that appropriate training data are required. In addition, the process of selecting optimal hyperparameters to ensure model performance has not been analyzed extensively. This study established DNN models with tuned hyperparameters to effectively predict the compressive strength of ternary-blended concrete. The dataset used in this study was a set of 775 on-site mix proportions of ternary-blended concrete provided by a ready-mix concrete company in South Korea. Water, cement, fine aggregate, coarse aggregate, fly ash, blast furnace slag, curing temperature, and curing humidity were the inputs, and compressive strength was the output. The basic statistical characteristics of the data and the performance (mean square error [MSE] and mean absolute error [MAE]) of 15 models varying in hidden layers and units were analyzed to determine the network architecture. Based on the model with the selected structure, hyperparameter tuning (batch size, dropout, and batch normalization) was applied to improve the performance of the DNN model. The advanced DNN model exhibited 18% lower MAE losses and 27% lower MSE losses than the conventional DNN model. In addition, based on the MAE and MSE losses, the advanced DNN model showed 4% and 12% lower errors than SVR, and 11% and 15% lower errors than RF, indicating that the DNN model with hyperparameter tuning outperformed the other models. © 2023 Elsevier Ltd
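The modeling setup described in the abstract can be sketched as follows. This is not the authors' code: the layer width (32 units), weight scale, and dummy data are illustrative assumptions; only the eight input features, the ReLU-based DNN with batch normalization and dropout, and the MAE/MSE evaluation are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eight inputs per the paper: water, cement, fine aggregate, coarse aggregate,
# fly ash, blast furnace slag, curing temperature, curing humidity.
X = rng.normal(size=(16, 8))                      # dummy mix-proportion batch
y = rng.normal(loc=30.0, scale=5.0, size=(16,))   # dummy strengths (MPa)

def batch_norm(h, eps=1e-5):
    # Normalize each hidden unit over the batch dimension.
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + eps)

def forward(X, W1, b1, W2, b2, drop_rate=0.0, training=False):
    h = np.maximum(0.0, X @ W1 + b1)        # ReLU hidden layer
    h = batch_norm(h)                       # tuned hyperparameter in the paper
    if training and drop_rate > 0.0:        # dropout is active only in training
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)    # inverted-dropout scaling
    return (h @ W2 + b2).ravel()

# One hidden layer of 32 units (illustrative; the paper compares 15 architectures).
W1 = rng.normal(scale=0.1, size=(8, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)

pred = forward(X, W1, b1, W2, b2)
mae = float(np.mean(np.abs(y - pred)))      # evaluation metrics from the paper
mse = float(np.mean((y - pred) ** 2))
print("MAE:", mae, "MSE:", mse)
```

The untrained weights give meaningless predictions; the sketch only shows how the inputs, the dropout/batch-normalization hyperparameters, and the MAE/MSE losses fit together.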
Appears in
Collections
COLLEGE OF ENGINEERING SCIENCES > MAJOR IN ARCHITECTURAL ENGINEERING > 1. Journal Articles
COLLEGE OF COMPUTING > SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Han Seung
ERICA College of Engineering (MAJOR IN ARCHITECTURAL ENGINEERING)
