Article (electronic), January 12, 2024

Deep convolutional neural networks are not mechanistic explanations of object recognition

In: Synthese: an international journal for epistemology, methodology and philosophy of science, Volume 203, Issue 1


Abstract

Given the extent to which deep convolutional neural networks are used to model the mechanism of object recognition, it is important to analyse the evidence of their similarity to the brain and the explanatory potential of these models. I focus on one frequent method of comparison, representational similarity analysis, and argue, first, that it underdetermines these models as how-actually mechanistic explanations. This happens because different similarity measures in this framework pick out different mechanisms across DCNNs and the brain in order to map them onto one another, and there is no arbitration between them in terms of relevance for object recognition. Second, the reason similarity measures are underdetermining to such a large degree stems from the highly idealised nature of these models, which also undermines their status as how-possibly mechanistic explanatory models of object recognition. Thus, building models with more theoretical consideration and choosing relevant similarity measures may bring us closer to the goal of mechanistic explanation.
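To make the abstract's point about similarity measures concrete, the following is a minimal sketch of representational similarity analysis, assuming hypothetical activation matrices for a DCNN layer and a brain region (the array names, sizes, and random data are placeholders, not from the article). It shows how the DCNN-brain correspondence is quantified by comparing representational dissimilarity matrices, and how the resulting score depends on which similarity measure is chosen.

```python
# Minimal RSA sketch (illustrative only; data and dimensions are hypothetical).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(0)
n_stimuli = 50
dcnn_acts = rng.standard_normal((n_stimuli, 512))   # hypothetical DCNN layer activations
brain_acts = rng.standard_normal((n_stimuli, 200))  # hypothetical voxel responses

# Representational dissimilarity matrices (RDMs): pairwise distances between
# stimulus-evoked activation patterns, vectorised as their upper triangles.
rdm_dcnn = pdist(dcnn_acts, metric="correlation")
rdm_brain = pdist(brain_acts, metric="correlation")

# Compare the two RDMs under different similarity measures; each measure can
# rank candidate DCNN layers or models differently, which is the source of the
# underdetermination discussed in the abstract.
for name, measure in [("Pearson", pearsonr), ("Spearman", spearmanr), ("Kendall tau", kendalltau)]:
    stat = measure(rdm_dcnn, rdm_brain)[0]
    print(f"{name} RDM similarity: {stat:.3f}")
```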

Languages

English

Publisher

Springer Science and Business Media LLC

ISSN: 1573-0964

DOI

10.1007/s11229-023-04461-3
