Deep convolutional neural networks are not mechanistic explanations of object recognition
In: Synthese: an international journal for epistemology, methodology and philosophy of science, Volume 203, Issue 1
Abstract
Given how widely deep convolutional neural networks (DCNNs) are used to model the mechanism of object recognition, it is important to analyse the evidence for their similarity to the brain and the explanatory potential of these models. I focus on one frequent method of comparison, representational similarity analysis, and argue, first, that it underdetermines these models as how-actually mechanistic explanations. This is because different similarity measures within this framework pick out different mechanisms across DCNNs and the brain to place in correspondence, and there is no way to arbitrate between them in terms of relevance for object recognition. Second, the underdetermination introduced by similarity measures stems largely from the highly idealised nature of these models, which also undermines their status as how-possibly mechanistic explanatory models of object recognition. Thus, building models with more theoretical consideration and choosing relevant similarity measures may bring us closer to the goal of mechanistic explanation.
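To make the method under discussion concrete, the sketch below (not from the article; all data are random placeholders) shows the basic representational similarity analysis workflow: building representational dissimilarity matrices (RDMs) for a hypothetical DCNN layer and a hypothetical brain region, then scoring their agreement. The choice of distance metric and of comparison statistic (Spearman vs. Pearson) is exactly the kind of analytic degree of freedom the abstract argues can pick out different correspondences.

```python
# Minimal RSA sketch with placeholder data (illustrative only).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)
n_stimuli = 50

# Hypothetical activation matrices: rows = stimuli, columns = units / voxels.
model_acts = rng.normal(size=(n_stimuli, 512))   # e.g. one DCNN layer
brain_acts = rng.normal(size=(n_stimuli, 200))   # e.g. IT voxel responses

# RDMs as condensed vectors of pairwise correlation distances between stimuli.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(brain_acts, metric="correlation")

# Two common ways to score model-brain similarity; they need not agree on
# which model (or layer) best "matches" the brain data.
rho, _ = spearmanr(model_rdm, brain_rdm)
r, _ = pearsonr(model_rdm, brain_rdm)
print(f"Spearman rho = {rho:.3f}, Pearson r = {r:.3f}")
```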
Languages
English
Publisher
Springer Science and Business Media LLC
ISSN: 1573-0964
DOI