Article (electronic), 12 January 2024

Deep convolutional neural networks are not mechanistic explanations of object recognition

In: Synthese: an international journal for epistemology, methodology and philosophy of science, Volume 203, Issue 1

Abstract

Given the extent to which deep convolutional neural networks are used to model the mechanism of object recognition, it becomes important to analyse the evidence for their similarity to the brain and the explanatory potential of these models. I focus on one frequent method of comparison, representational similarity analysis, and argue, first, that it underdetermines these models as how-actually mechanistic explanations. This happens because different similarity measures within this framework pick out different mechanisms across DCNNs and the brain in order to relate them, and there is no arbitration between the measures in terms of their relevance for object recognition. Second, the reason the similarity measures underdetermine to such a large degree stems from the highly idealised nature of these models, which undermines their status as how-possibly mechanistic explanatory models of object recognition as well. Thus, building models with more theoretical consideration and choosing relevant similarity measures may bring us closer to the goal of mechanistic explanation.

Languages

English

Publisher

Springer Science and Business Media LLC

ISSN: 1573-0964

DOI

10.1007/s11229-023-04461-3
