
Which deep learning model can best explain object representations of within-category exemplars?

Full metadata record
DC Field | Value | Language
dc.contributor.author | Lee, Dongha | -
dc.date.accessioned | 2023-08-16T09:31:14Z | -
dc.date.available | 2023-08-16T09:31:14Z | -
dc.date.created | 2022-01-11 | -
dc.date.issued | 2021-09 | -
dc.identifier.issn | 1534-7362 | -
dc.identifier.uri | http://scholarworks.bwise.kr/kbri/handle/2023.sw.kbri/299 | -
dc.description.abstract | Deep neural network (DNN) models achieve human-level performance in tasks such as object recognition. Recent developments in the field have made it possible to test the hierarchical similarity of object representations between the human brain and DNNs. However, how DNNs represent the geometry of object exemplars within a single category remains unclear. In this study, we investigate which DNN model best explains invariant within-category object representations by computing the similarity between the representational geometries of visual features extracted at the high-level layers of different DNN models. We also test the invariance of these models' within-category object representations by identifying object exemplars. Our results show that transfer learning models based on ResNet50 best explained both within-category object representation and object identification. These results suggest that the invariance of object representations in deep learning depends not on deepening the network but on building a better transfer learning model. | -
dc.language | English | -
dc.language.iso | en | -
dc.publisher | ASSOC RESEARCH VISION OPHTHALMOLOGY INC | -
dc.title | Which deep learning model can best explain object representations of within-category exemplars? | -
dc.type | Article | -
dc.contributor.affiliatedAuthor | Lee, Dongha | -
dc.identifier.doi | 10.1167/jov.21.10.12 | -
dc.identifier.scopusid | 2-s2.0-85116204050 | -
dc.identifier.wosid | 000708879800004 | -
dc.identifier.bibliographicCitation | JOURNAL OF VISION, v.21, no.10 | -
dc.relation.isPartOf | JOURNAL OF VISION | -
dc.citation.title | JOURNAL OF VISION | -
dc.citation.volume | 21 | -
dc.citation.number | 10 | -
dc.type.rims | ART | -
dc.type.docType | Article | -
dc.description.journalClass | 1 | -
dc.description.isOpenAccess | N | -
dc.description.journalRegisteredClass | scie | -
dc.description.journalRegisteredClass | scopus | -
dc.relation.journalResearchArea | Ophthalmology | -
dc.relation.journalWebOfScienceCategory | Ophthalmology | -
dc.subject.keywordPlus | NEURAL-NETWORKS | -
dc.subject.keywordPlus | NEURONS | -
dc.subject.keywordPlus | BRAIN | -
dc.subject.keywordPlus | RECOGNITION | -
dc.subject.keywordPlus | KNOWLEDGE | -
dc.subject.keywordPlus | IDENTITY | -
dc.subject.keywordPlus | DORSAL | -
dc.subject.keywordAuthor | invariant object representations | -
dc.subject.keywordAuthor | deep neural networks | -
dc.subject.keywordAuthor | object exemplars | -
dc.subject.keywordAuthor | representation similarity | -
dc.subject.keywordAuthor | identification accuracy | -
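
The abstract above summarizes two analyses: comparing the representational geometries of within-category exemplars across DNN models, and scoring how well individual exemplars can be identified from their feature patterns. The sketch below is a minimal illustration of that style of analysis, not the authors' code; it assumes high-level layer activations have already been extracted into NumPy arrays, and all array names, shapes, and random stand-in data are hypothetical.

# Illustrative sketch only (hypothetical names; not the authors' code).
# Assumes activations for N within-category exemplars have already been
# extracted from a high-level layer of each model into (N, n_units) arrays.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(features):
    # Representational dissimilarity matrix: correlation distance
    # (1 - Pearson r) between every pair of exemplar feature patterns.
    return squareform(pdist(features, metric="correlation"))

def rdm_similarity(rdm_a, rdm_b):
    # Second-order similarity between two models: Spearman correlation
    # of the upper triangles of their RDMs.
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

def identification_accuracy(feats_ref, feats_test):
    # One-nearest-neighbour identification within a single feature space:
    # each test pattern is assigned to the most correlated reference
    # pattern; accuracy is the fraction of exemplars matched to themselves.
    n = len(feats_ref)
    corr = np.corrcoef(feats_ref, feats_test)[:n, n:]
    return float(np.mean(np.argmax(corr, axis=0) == np.arange(n)))

# Hypothetical usage with random stand-ins for extracted activations
# (12 exemplars of one category; feature dimensionality differs by model).
rng = np.random.default_rng(0)
feats_model_a = rng.normal(size=(12, 2048))
feats_model_b = rng.normal(size=(12, 4096))
print("RDM similarity:", rdm_similarity(rdm(feats_model_a), rdm(feats_model_b)))

# Same exemplars under two conditions (e.g. viewpoints) from one model.
feats_view_1 = rng.normal(size=(12, 2048))
feats_view_2 = feats_view_1 + 0.1 * rng.normal(size=(12, 2048))
print("Identification accuracy:", identification_accuracy(feats_view_1, feats_view_2))

The correlation-distance RDMs and Spearman rank comparison used here are common representational-similarity conventions; the exact distance measures, stimuli, and model layers used in the paper may differ.
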
Files in This Item
There are no files associated with this item.
Appears in Collections
Research Division > Cognitive Science Research Group > 1. Journal Articles



Related Researcher

Lee, Dongha
Research Division (Cognitive Science Research Group)
