| Downloads | Citations | Reads |
| 121 | 0 | 118 |
Abstract: This paper builds on Marchionni's critical analysis of theoretical models' failure to provide explanations, distinguishing the notion of “explanation” from that of “empirical support” and arguing that only the former serves as a genuine criterion for evaluating a model's explanatory power. It also clarifies the appropriate contexts for employing the concept of “explanatory understanding”. On this basis, the paper challenges Sullivan's recent claim that deep neural network (DNN) models can provide explanatory understanding of target phenomena. It argues that theoretical models and DNN models are not comparable in their explanatory function, and that reducing link uncertainty does not suffice to make a model explanatory; rather, the black-box nature of DNN mechanisms remains the decisive obstacle to their explanatory capacity. Furthermore, adopting an extended-cognition perspective, the paper contends that the “modeler-theoretical model” cognitive system can engage in explicit reasoning to fulfill explanatory and argumentative tasks, whereas the “modeler-DNN model” cognitive system remains embedded in a private, opaque learning process and lacks the mechanistic visibility that explanation requires.
[1]SULLIVAN E.Understanding from machine learning models[J].The British journal for the philosophy of science,2022,73(1).
[2]BUIJSMAN S.Causal scientific explanations from machine learning[J].Synthese,2023,202(6).
[3]TAMIR M,SHECH E.Understanding from deep learning models in context[C]// LAWLER I,KHALIFA K,SHECH E.Scientific understanding and representation:modeling in the physical sciences.New York:Routledge,2022.
[4]RÄZ T,BEISBART C.The importance of understanding deep learning[J].Erkenntnis,2024,89.
[5]REISS J.The explanation paradox[J].Journal of economic methodology,2012,19:49.
[6]ALEXANDROVA A,NORTHCOTT R.It’s just a feeling:why economic models do not explain[J].Journal of economic methodology,2013,20(3):262.
[7]RICE C.Moving beyond causes:optimality models and scientific explanation[J].Noûs,2015,49(3):589-615.
[8]REUTLINGER A,HANGLEITER D,HARTMANN S.Understanding with(toy) models[J].British journal for the philosophy of science,2017,69(4):1095.
[9]MÄKI U.On a paradox of truth,or how not to obscure the issue of whether explanatory models can be true[J].Journal of economic methodology,2013,20(3).
[10]YLIKOSKI P,AYDINONAT E.Understanding with theoretical models[J].Journal of economic methodology,2014,21(1):23-24.
[11]KUORIKOSKI J,YLIKOSKI P.External representations and scientific understanding[J].Synthese,2015,192.
[12]MARCHIONNI C.What is the problem with model-based explanation in economics?[J].Sciendo,2017,9(47).
[13]HAUSMAN D.Paradox postponed[J].Journal of economic methodology,2013,20(3):250.
[14]BAUMBERGER C,BEISBART C,BRUN G.What is understanding?An overview of recent debates in epistemology and philosophy of science[C]// GRIMM S,BAUMBERGER C,AMMON S.Explaining understanding:new perspectives from epistemology and philosophy of science.New York:Routledge,2017:13.
[15]HUGHES R I G.Models and representation[J].Philosophy of science,1997,64(4):S331.
[16]RUSSO F,WILLIAMSON J.Interpreting causality in the health sciences[J].International studies in the philosophy of science,2007,21(2):159.
[17]CLARK A,CHALMERS D.The extended mind[J].Analysis,1998,58(1):7-19.
[18]KUORIKOSKI J,LEHTINEN A.Incredible worlds,credible results[J].Erkenntnis,2009,70:122.
(1) For convenience of exposition, every mention of “model” below refers specifically to “theoretical model”.
(2) Emphasis added by the author.
(3) This depends on which account of explanation is adopted; whichever account is chosen, the Schelling model can satisfy it.
(4) Subsequent empirical evidence may change the model's degree of empirical support.
Basic information:
CLC number: TP183
Citation:
[1]XIANG Dun. Epistemological reflection on deep neural network models: based on the distinction between explanation and empirical support[J]. 科学技术哲学研究 (Studies in Philosophy of Science and Technology), 2025, 42(6): 24-30.
Funding:
Tianjin Philosophy and Social Science Planning Project “A Perspectivist Study of Model Representation” (TJZXQN23-001)