Agrippa's Trilemma Revisited: Opacity, Circularity, and Structural Dogmatism in High-Dimensional Algorithmic Models

El trilema de Agripa revisitado: opacidad, circularidad y dogmatismo estructural en modelos algorítmicos de alta dimensión

Keywords: Agrippa's trilemma, epistemic opacity, algorithmic justification, structural-pragmatic epistemology, epistemic accountability in AI

Abstract

This article revisits Agrippa's trilemma in the context of artificial intelligence systems, specifically high-dimensional algorithmic models. The classic epistemological challenge holds that any justification of knowledge inevitably ends in infinite regress, circular reasoning, or arbitrary and dogmatic foundations, so that knowledge appears impossible. The challenge resurfaces forcefully in machine learning models characterized by opacity, recursive optimization, and performance-based validation. Large language models and recommendation systems are paradigmatic cases: the study shows how algorithmic inference often operates without semantic grounding, explicit logical structure, or access to reasons intelligible to humans, thereby generating statistically robust but epistemically opaque results. It is argued that the trilemma reappears algorithmically as the combination of three interrelated phenomena: 1) opacity, arising from the very architecture and decision pathways of machine learning models; 2) circularity, when training, validation, and performance measures feed back into one another without external epistemic reference; and 3) structural dogmatism, the tacit assumption that high-performing outputs are true in the correspondence sense even though the interior of the "black box" cannot be inspected. On this basis, the article proposes a structural-pragmatic epistemology in which justification is understood not as access to internal reasons but as the satisfaction of minimum requirements of coherence, traceability, and human accountability. The paper argues that justification in AI, both toward human users as end recipients and among AI agents, must be situated, accountable, and corrigible within a sociotechnical system able to secure epistemic legitimacy without presupposing total transparency or ideally rational subjects. Finally, it is argued that epistemic accountability in AI requires technical robustness on the one hand and, on the other, constant oversight grounded in philosophical and normative reflection on its outcomes and consequences.
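The circularity claim can be made concrete with a minimal sketch, not drawn from the article itself: a standard train/validate/test pipeline in Python with scikit-learn, where the dataset, model, and hyperparameter grid are all illustrative assumptions. Every figure of merit, including the criterion used to select the hyperparameter, is computed from the same data-generating process, so the model's "justification" never leaves the loop.

# A minimal, illustrative sketch (assumed names and parameters, not the
# article's method): every metric below is internal to one data pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic high-dimensional data: the "world" is generated by the
# pipeline itself, so all subsequent measurements are self-referential.
X, y = make_classification(n_samples=2000, n_features=200,
                           n_informative=20, random_state=0)

# Train, validation, and test splits all come from the same distribution.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2,
                                                random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp,
                                                  test_size=0.25,
                                                  random_state=0)

# The hyperparameter is "justified" solely by validation accuracy ...
best_c, best_val = None, -1.0
for c in (0.01, 0.1, 1.0, 10.0):
    model = LogisticRegression(C=c, max_iter=2000).fit(X_train, y_train)
    val_acc = accuracy_score(y_val, model.predict(X_val))
    if val_acc > best_val:
        best_c, best_val = c, val_acc

# ... and the final claim of success is test accuracy, measured on data
# drawn from the very process the model was tuned against.
final = LogisticRegression(C=best_c, max_iter=2000).fit(X_train, y_train)
test_acc = accuracy_score(y_test, final.predict(X_test))
print(f"C={best_c}: validation={best_val:.3f}, test={test_acc:.3f}")

The point of the sketch is structural rather than statistical: nothing in the pipeline supplies an epistemic reference outside the pipeline. The validation score justifies the hyperparameter, the test score justifies the model, and both are drawn from the distribution against which the model was tuned.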


Author Biography

Fabio Morandín-Ahuerma, Benemérita Universidad Autónoma de Puebla, Puebla, Mexico

PhD (Cum Laude) in Philosophy and Level 1 member of the Sistema Nacional de Investigadores (SNII), Mexico.

His main lines of research focus on the opportunities and risks associated with the development of advanced machine learning models, algorithmic ethics, the role of public and private international organizations in AI governance, and the ethical dilemmas of global technology corporations.

He is the author of four peer-reviewed books and 50 single-authored and co-authored articles. His most notable works include «Inteligencia Artificial: Oportunidades y Desafíos para la Educación» (Concytep, 2024); «Principios Normativos para una Ética de la Inteligencia Artificial» (Concytep, 2023); and «Neuroética Fundamental y Teoría de la Decisión» (Concytep, 2021).

He completed a postdoctoral stay at the Centro de Investigaciones Filosóficas (CIF) in Buenos Aires, Argentina.

He holds a degree in Philosophy from the Universidad Veracruzana, where he taught for more than 15 years.

He is currently a full-time research professor at the Benemérita Universidad Autónoma de Puebla (BUAP) and leader of the academic group “Estudios Regionales Transdisciplinarios BUAP-CA-354”.

He has led various research projects and supervised more than 60 undergraduate and graduate theses.

He is editor-in-chief of the Revista Multidisciplinaria de Ciencia Básica, Humanidades, Arte y Educación and a reviewer for Springer Nature and Taylor & Francis journals. He has received the “Arte, Ciencia y Luz” award.


Published
2026-03-21
How to Cite
Fabio Morandín-Ahuerma. (2026). Agrippa’s Trilemma Revisited: Opacity, Circularity, and Structural Dogmatism in High-Dimensional Algorithmic Models: El trilema de Agripa revisitado: opacidad, circularidad y dogmatismo estructural en modelos algorítmicos de alta dimensión. Revista De Filosofía, 43(115), 95-109. https://doi.org/10.5281/zenodo.19835158